Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough

Adds end-to-end shielded transaction support: new shielded types, crypto provider and creation/processing logic, header serialization/deserialization, wallet/storage chain support and PIN propagation, send/prepare/sign changes, history/schema updates, many tests, and integration/docker adjustments.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Wallet as HathorWallet
    participant Storage as Storage
    participant Crypto as ShieldedCryptoProvider
    participant Network as Fullnode
    rect rgba(100,200,100,0.5)
        Note over Wallet,Network: Construct & send shielded transaction
        Wallet->>Storage: getScanXPubKey/getSpendXPubKey / selectUtxos
        Storage-->>Wallet: xpubs / UTXOs
        Wallet->>Crypto: createShieldedOutputs(defs, inputBlinding...)
        Crypto-->>Crypto: generate blinding factors, commitments, proofs
        Crypto-->>Wallet: created outputs (commitments, proofs, ephemeralPubkey)
        Wallet->>Network: sendTransaction(tx with ShieldedOutputsHeader)
        Network-->>Wallet: tx accepted
    end
    rect rgba(100,100,200,0.5)
        Note over Network,Wallet: Receive & process shielded transaction
        Network-->>Wallet: onNewTx(tx with shielded header)
        Wallet->>Storage: getScanXPrivKey(pinCode?)
        Storage-->>Wallet: scan xpriv (or error)
        Wallet->>Crypto: rewindAmount/rewindFull(scanPriv, ephemeralPubkey, ...)
        Crypto-->>Wallet: decrypted value, tokenUid, blinding factors
        Wallet->>Storage: processShieldedOutputs -> save UTXO/update balances
        Storage-->>Wallet: processed outputs merged into history
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 16
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/types.ts (1)
277-291: ⚠️ Potential issue | 🟠 Major
Split `IHistoryOutput` into separate transparent and shielded shapes.
The interface declares `value`, `token_data`, `script`, `decoded`, `token`, and `spent_by` as required, but the comment correctly states that shielded entries (where `type === 'shielded'`) carry shielded fields "instead of value/token", meaning they lack the transparent fields entirely. This creates a type safety gap: TypeScript won't prevent code from accessing `output.value` on a shielded entry.
The codebase works around this with runtime guards (`isShieldedOutputEntry()` checks) and unsafe casts (`as any`). Instead, use a discriminated union keyed by `type`, or separate the types in `IHistoryTx.outputs` to reflect the actual mixed transparent/shielded reality until they're split (as already happens in `new/wallet.ts`). The Zod schema already makes these fields optional as a workaround; the TypeScript types should match.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/types.ts` around lines 277 - 291, The IHistoryOutput should be split into a discriminated union so shielded entries are typed separately: create two interfaces (e.g., TransparentHistoryOutput with required fields value, token_data, script, decoded, token, spent_by and maybe selected_as_input, and ShieldedHistoryOutput with type: 'shielded' and the shielded-only fields commitment, range_proof, ephemeral_pubkey, asset_commitment, surjection_proof and without value/token fields), then change IHistoryOutput to TransparentHistoryOutput | ShieldedHistoryOutput and update usages such as IHistoryTx.outputs and any code using isShieldedOutputEntry() to rely on the discriminant instead of unsafe casts; also align the new Zod schema shape in new/wallet.ts to these types so TypeScript and runtime validation match.

src/models/address.ts (1)
94-119: ⚠️ Potential issue | 🟠 Major
Correlate the payload length with the version byte.
`validateAddress()` currently accepts any known version byte with either 25-byte or 71-byte payloads. That lets a checksummed 71-byte payload with the `p2pkh`/`p2sh` byte, or a 25-byte payload with the shielded byte, pass validation, and `getType()` will then classify it from the version byte alone.
🛡️ Suggested guard

```diff
 const firstByte = addressBytes[0];
 if (!this.network.isVersionByteValid(firstByte)) {
   throw new AddressError(
     `${errorMessage} Invalid network byte. Expected: ${this.network.versionBytes.p2pkh}, ${this.network.versionBytes.p2sh}, or ${this.network.versionBytes.shielded} and received ${firstByte}.`
   );
 }
+
+const isLegacyVersion =
+  firstByte === this.network.versionBytes.p2pkh ||
+  firstByte === this.network.versionBytes.p2sh;
+const isShieldedVersion = firstByte === this.network.versionBytes.shielded;
+
+if (
+  (addressBytes.length === LEGACY_ADDR_LENGTH && !isLegacyVersion) ||
+  (addressBytes.length === SHIELDED_ADDR_LENGTH && !isShieldedVersion)
+) {
+  throw new AddressError(`${errorMessage} Address length does not match its version byte.`);
+}
```

Also applies to: 136-149
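As a standalone illustration of the correlation check, a minimal sketch follows. The constants mirror the `LEGACY_ADDR_LENGTH`/`SHIELDED_ADDR_LENGTH` names used in this review, but the version-byte values are made up for the example and are not the library's real ones:

```typescript
// Sketch of the proposed length/version correlation guard, detached from the
// Address class. Version-byte values below are illustrative placeholders.
const LEGACY_ADDR_LENGTH = 25;
const SHIELDED_ADDR_LENGTH = 71;

interface VersionBytes {
  p2pkh: number;
  p2sh: number;
  shielded: number;
}

function assertLengthMatchesVersion(addressBytes: Uint8Array, versionBytes: VersionBytes): void {
  const firstByte = addressBytes[0];
  const isLegacyVersion =
    firstByte === versionBytes.p2pkh || firstByte === versionBytes.p2sh;
  const isShieldedVersion = firstByte === versionBytes.shielded;
  if (
    (addressBytes.length === LEGACY_ADDR_LENGTH && !isLegacyVersion) ||
    (addressBytes.length === SHIELDED_ADDR_LENGTH && !isShieldedVersion)
  ) {
    // A 71-byte payload with a p2pkh/p2sh byte (or 25 bytes with the shielded
    // byte) is rejected even though each part is individually well-formed.
    throw new Error('Address length does not match its version byte.');
  }
}
```

Note this guard only constrains the two known lengths, matching the suggested diff; other lengths are still left to the existing validation.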
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/models/address.ts` around lines 94 - 119, validateAddress() currently accepts any known version byte with either payload length; enforce that the version byte and payload length correlate: after computing firstByte (version byte) and addressBytes.length, check network.versionBytes.p2pkh and .p2sh require LEGACY_ADDR_LENGTH, and network.versionBytes.shielded requires SHIELDED_ADDR_LENGTH (or vice-versa if other version bytes exist), and if mismatch throw AddressError using the existing errorMessage; update the same guard in the later block that also validates version bytes (the code paths referenced by validateAddress, getType, LEGACY_ADDR_LENGTH, SHIELDED_ADDR_LENGTH, network.versionBytes, and AddressError).

src/utils/storage.ts (1)
327-346: ⚠️ Potential issue | 🔴 Critical
Guard index-limit mode when shielded derivation is unavailable.
`lastLoaded` now always uses `shieldedLastLoadedAddressIndex`, even for wallets where `deriveShieldedAddressFromStorage()` will always return `null`. In that case the shielded cursor never advances from its sentinel value, so `checkIndexLimit()` keeps returning the same range and sync can loop forever.
Suggested fix

```diff
-  // If either chain is below the end index, load more.
-  // Use min so the lagging chain gets its addresses loaded.
-  const lastLoaded = Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex);
+  const hasShieldedKeys = !!(await storage.getAccessData())?.spendXpubkey;
+  // If shielded derivation is unavailable, only the legacy cursor can advance.
+  const lastLoaded = hasShieldedKeys
+    ? Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex)
+    : lastLoadedAddressIndex;
   if (lastLoaded < limits.endIndex) {
```
Verify each finding against the current code and only fix it if needed. In `@src/utils/storage.ts` around lines 327 - 346, checkIndexLimit() uses shieldedLastLoadedAddressIndex unconditionally which causes an infinite loop when deriveShieldedAddressFromStorage() is unavailable; update the logic in checkIndexLimit (where storage.getWalletData(), isIndexLimitScanPolicy() and storage.getIndexLimit() are used) to only include shieldedLastLoadedAddressIndex when shielded derivation is supported (e.g., detect via deriveShieldedAddressFromStorage() or a flag from storage), otherwise fall back to using only lastLoadedAddressIndex so the nextIndex/count are computed from the non-shielded cursor and the scan advances correctly.
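The effect of that guard can be seen in a minimal sketch. The arguments stand in for the storage-derived state inside `checkIndexLimit()`, with -1 playing the shielded chain's "nothing loaded" sentinel role (an assumption for the example):

```typescript
// Without shielded keys the shielded cursor never advances, so including it
// in the min() would pin the scan at the sentinel forever.
function effectiveLastLoaded(
  lastLoadedAddressIndex: number,
  shieldedLastLoadedAddressIndex: number,
  hasShieldedKeys: boolean,
): number {
  return hasShieldedKeys
    ? Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex)
    : lastLoadedAddressIndex;
}
```

With this shape, a wallet without shielded keys advances on the legacy cursor alone, while a wallet with both chains still waits for the lagging one.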
🧹 Nitpick comments (7)
src/new/types.ts (1)
330-344: Model shielded outputs as a separate variant.
`shielded?: ShieldedOutputMode` documents the invariant, but it still allows a legacy/base58 address plus `shielded: FULLY_SHIELDED` to type-check. Since this is a public API, a discriminated union would catch that misuse earlier.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/new/types.ts` around lines 330 - 344, ProposedOutput currently allows any address with an optional shielded flag so code can wrongly pair a legacy/base58 address with shielded modes; replace ProposedOutput with a discriminated union (e.g. LegacyOutput and ShieldedOutput) so the discriminator is the shielded field: LegacyOutput has shielded?: undefined and accepts legacy Address type, while ShieldedOutput has shielded: ShieldedOutputMode (or a specific literal like "AMOUNT_SHIELDED"|"FULLY_SHIELDED") and its address type is the ShieldedAddress type (or otherwise constrained); update usages expecting ProposedOutput to accept the union and ensure ShieldedOutput includes the same value/timelock/token properties.

src/schemas.ts (1)
87-111: Consider using a discriminated union for stricter validation.
Making all fields optional accommodates both shapes but loses validation strictness. A shielded entry could pass without required shielded fields (e.g., `commitment`), or a transparent output could pass without `value`.
Consider using `z.discriminatedUnion` on the `type` field to enforce shape-specific requirements:
♻️ Suggested approach using discriminated union

```typescript
const TransparentOutputSchema = z.object({
  value: bigIntCoercibleSchema,
  token_data: z.number(),
  script: z.string(),
  decoded: IHistoryOutputDecodedSchema,
  token: z.string(),
  spent_by: z.string().nullable().optional(),
  selected_as_input: z.boolean().optional(),
}).passthrough();

const ShieldedOutputSchema = z.object({
  type: z.literal('shielded'),
  commitment: z.string(),
  range_proof: z.string(),
  ephemeral_pubkey: z.string(),
  asset_commitment: z.string(),
  surjection_proof: z.string(),
}).passthrough();

export const IHistoryOutputSchema = z.union([
  TransparentOutputSchema,
  ShieldedOutputSchema,
]) as ZodSchema<IHistoryOutput>;
```

If the fullnode can return partially populated shielded entries or mixed shapes that don't fit a discriminated union, the current permissive approach is acceptable.
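For reference, a plain-TypeScript mirror of these two shapes, with a type guard in place of zod so the narrowing behavior is visible on its own. Field names follow the schemas in this review; the trimmed field lists and `bigint` value type are simplifications for the example:

```typescript
// Discriminated union: `type` is the discriminant, absent on transparent
// entries and the literal 'shielded' on shielded ones.
interface TransparentHistoryOutput {
  type?: undefined;
  value: bigint;
  token: string;
  script: string;
}

interface ShieldedHistoryOutput {
  type: 'shielded';
  commitment: string;
  range_proof: string;
  ephemeral_pubkey: string;
}

type HistoryOutput = TransparentHistoryOutput | ShieldedHistoryOutput;

function isShielded(o: HistoryOutput): o is ShieldedHistoryOutput {
  return o.type === 'shielded';
}

function describe(o: HistoryOutput): string {
  // After the guard, the compiler only allows the matching variant's fields:
  // `o.value` on a shielded entry is now a type error instead of `undefined`.
  return isShielded(o) ? `shielded:${o.commitment}` : `transparent:${o.value}`;
}
```

This is the compile-time counterpart of the zod union above: the runtime schema rejects malformed entries, while the union type stops `output.value` access on shielded entries from compiling.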
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/schemas.ts` around lines 87 - 111, The schema IHistoryOutputSchema currently marks all fields optional which weakens validation and allows malformed transparent or shielded entries; update it to a discriminated union based on the type field by creating two schemas (e.g., TransparentOutputSchema and ShieldedOutputSchema) where TransparentOutputSchema requires fields like value (bigIntCoercibleSchema) and related transparent properties and ShieldedOutputSchema requires type: z.literal('shielded') and required shielded fields such as commitment, range_proof, ephemeral_pubkey, asset_commitment, surjection_proof, then export IHistoryOutputSchema as z.union([TransparentOutputSchema, ShieldedOutputSchema]) (retain .passthrough() if needed) so each shape is strictly validated.

src/utils/transaction.ts (1)
83-84: Prefer `undefined` over `null` for optional state here.
These new `null` sentinel types go against the repo-wide TS migration rule and make the helper/cache state less consistent with the rest of the module.
Based on learnings, avoid using `null` to indicate optional/unprovided parameters in TypeScript; use `undefined` and the `?` optional syntax instead.
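A minimal illustration of the convention (`buildCacheKey` is a made-up helper for the example, not this module's API):

```typescript
// Optional parameter with `?` instead of a `string | null` sentinel:
// omitting the argument and passing undefined both hit the same branch,
// so no separate null check is needed anywhere downstream.
function buildCacheKey(txId: string, cacheKey?: string): string {
  return cacheKey === undefined ? txId : `${txId}:${cacheKey}`;
}
```

Callers that previously passed `null` simply omit the argument, and the type no longer admits two distinct "empty" values.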
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/transaction.ts` around lines 83 - 84, The helper is using null as a sentinel for optional state; update the parameter and checks to use undefined instead: change the type annotations from `{ type?: string | null }` to `{ type?: string }` in isShieldedOutputEntry and the other occurrence around the code flagged (lines ~193-194), and update any strict equality checks or guards that compare to null to instead check for undefined or use truthiness (`output.type !== undefined` or `!!output.type`) so the module follows the repo-wide TS convention of using undefined for optional/unprovided values.

__tests__/integration/configuration/docker-compose.yml (1)
11-11: Pin these shielded test images by digest to ensure reproducible integration environments.
Both new tags are mutable prerelease-style labels. If either image is republished, the integration environment can change underneath the same commit and make shielded test failures harder to reproduce locally or in CI. The images can be pinned by their digests instead:
- `hathornetwork/hathor-core:experimental-shielded-outputs-alpha-v1@sha256:c1d372698f0b549dbb2e2e6955ae27ec1f9cf728948c6d758651636b4c3ff5f5`
- `hathornetwork/tx-mining-service:shielded-outputs-v1@sha256:9e78ce25741fa07b092c5ab55ad18d6f76f40a28d1b28e6580bb7fd133d511b4`

This applies to lines 11 and 45.
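Applied to the compose file, the pinned entries would look roughly like this. The service names are illustrative (the real compose file may differ); the digests are the ones listed above:

```yaml
services:
  fullnode:
    # Tag kept for readability; the digest is what actually pins the image.
    image: "${HATHOR_LIB_INTEGRATION_TESTS_FULLNODE_IMAGE:-hathornetwork/hathor-core:experimental-shielded-outputs-alpha-v1@sha256:c1d372698f0b549dbb2e2e6955ae27ec1f9cf728948c6d758651636b4c3ff5f5}"
  tx-mining-service:
    image: "hathornetwork/tx-mining-service:shielded-outputs-v1@sha256:9e78ce25741fa07b092c5ab55ad18d6f76f40a28d1b28e6580bb7fd133d511b4"
```

The env-var override still works, but the default now resolves to exactly one image regardless of tag republishing.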
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__tests__/integration/configuration/docker-compose.yml` at line 11, Replace the mutable image tags with their content-addressable digests to make integration tests reproducible: update the value currently using "hathornetwork/hathor-core:experimental-shielded-outputs-alpha-v1" (referenced via ${HATHOR_LIB_INTEGRATION_TESTS_FULLNODE_IMAGE:-...}) to "hathornetwork/hathor-core:experimental-shielded-outputs-alpha-v1@sha256:c1d372698f0b549dbb2e2e6955ae27ec1f9cf728948c6d758651636b4c3ff5f5" and likewise pin "hathornetwork/tx-mining-service:shielded-outputs-v1" to "hathornetwork/tx-mining-service:shielded-outputs-v1@sha256:9e78ce25741fa07b092c5ab55ad18d6f76f40a28d1b28e6580bb7fd133d511b4" so the Docker Compose entries for those images are fixed by digest.

src/storage/memory_store.ts (1)
469-476: Consider defining a proper type for shielded output detection.
The cast `(output as any).type !== 'shielded'` works but bypasses type safety. Consider extending the output type in `IHistoryTx` or using a type guard function for cleaner detection.
♻️ Example type guard

```typescript
// In a types or utils file
function isShieldedHistoryOutput(output: unknown): boolean {
  return (output as { type?: string }).type === 'shielded';
}
```
Verify each finding against the current code and only fix it if needed. In `@src/storage/memory_store.ts` around lines 469 - 476, The code uses an unsafe cast "(output as any).type !== 'shielded'" to detect shielded outputs; instead add a proper type or type guard and use it: extend the output type in IHistoryTx (or create a dedicated IHistoryOutput with optional "type?: string") and replace the any-cast check with a type guard function (e.g., isShieldedHistoryOutput(output): boolean) or a typed property check so the condition in the loop uses the guard (or the typed property) to determine shielded outputs rather than casting to any; update references in the loop around the variables "output", "IHistoryTx", and the file "memory_store.ts" to call the new guard/typed property.

src/models/shielded_output.ts (1)
44-64: Consider using an options object for constructor parameters.
The constructor has 9 positional parameters with `value` positioned between `assetCommitment` and `surjectionProof`. This ordering is error-prone and the current signature makes it easy to accidentally swap `assetCommitment` and `value`. Consider using a named options object pattern.
♻️ Example refactor using options object

```diff
-  constructor(
-    mode: ShieldedOutputMode,
-    commitment: Buffer,
-    rangeProof: Buffer,
-    tokenData: number,
-    script: Buffer,
-    ephemeralPubkey: Buffer,
-    assetCommitment?: Buffer,
-    value: OutputValueType = 0n,
-    surjectionProof?: Buffer,
-  ) {
+  constructor(opts: {
+    mode: ShieldedOutputMode;
+    commitment: Buffer;
+    rangeProof: Buffer;
+    tokenData: number;
+    script: Buffer;
+    ephemeralPubkey: Buffer;
+    assetCommitment?: Buffer;
+    surjectionProof?: Buffer;
+    value?: OutputValueType;
+  }) {
+    this.mode = opts.mode;
+    this.commitment = opts.commitment;
+    // ... etc
+    this.value = opts.value ?? 0n;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/models/shielded_output.ts` around lines 44 - 64, The constructor on class ShieldedOutput accepts many positional args (mode, commitment, rangeProof, tokenData, script, ephemeralPubkey, assetCommitment, value, surjectionProof) which is error-prone (especially assetCommitment vs value); refactor the constructor to accept a single options object (e.g., { mode, commitment, rangeProof, tokenData, script, ephemeralPubkey, assetCommitment?, value?, surjectionProof? }) and update the constructor body to pull values from that object, update any call sites to pass a named options object, and keep the same property names on the instance (this.mode, this.commitment, this.assetCommitment, this.value, this.surjectionProof) to preserve behavior.

src/storage/storage.ts (1)
83-91: Consider clearing pinCode after use for better security hygiene.
Storing `pinCode` in memory on the Storage instance means it persists for the lifetime of the wallet session. While this may be intentional for convenience during shielded output decryption, consider whether the pin should be cleared after processing completes or after a timeout, to reduce the window of exposure if the process memory is compromised.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/storage/storage.ts` around lines 83 - 91, The Storage class currently keeps the pinCode property in memory for the wallet lifetime; update the code so any method that consumes pinCode (e.g., the shielded output decryption routines that use shieldedCryptoProvider / the promise used to chain processing of unlocked UTXOs) clears this.pinCode when finished — use a try/finally around the decryption/processing logic to set this.pinCode = undefined (or null) in the finally block, and optionally add an async timeout-based clear (clear after X ms) where pinCode is set to reduce exposure.
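A compact sketch of the try/finally pattern. `PinHolder` and `runWithPin` are illustrative names, and the real processing path in Storage is async and shaped differently; the point is only that the PIN is cleared even when the work throws:

```typescript
// Illustrative holder: the cached PIN is wiped in `finally`, so it is gone
// whether the work callback returns or throws.
class PinHolder {
  pinCode?: string;

  setPinCode(pin?: string): void {
    // Calling with no argument clears the cached PIN.
    this.pinCode = pin;
  }

  runWithPin<T>(work: (pin: string) => T): T {
    if (this.pinCode === undefined) throw new Error('PIN not set');
    try {
      return work(this.pinCode);
    } finally {
      this.pinCode = undefined;
    }
  }
}
```

An async variant would look the same with `await` inside the `try`; `finally` runs after the awaited promise settles, so the exposure window still closes when processing ends.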
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@__tests__/integration/hathorwallet_facade.test.ts`:
- Around line 77-78: The test currently asserts
expect(tx.nonce).toBeGreaterThan(0) which is flaky; change it to assert a
guaranteed mining side effect (for example, assert tx.hash is truthy or the
transaction status/confirmation exists) and only perform a type check on the
nonce. Specifically, update the assertions after sendTx.runFromMining() to
something like: assert tx.hash (or tx.status/confirmation) is present to prove
it was mined, and replace the strict numeric assertion with a type check like
typeof tx.nonce === 'number' (referencing sendTx.runFromMining and tx.nonce).
In `@__tests__/shielded/processing.test.ts`:
- Around line 156-176: The test currently mocks getScanXPrivKey to reject, which
prevents calling provider.rewindAmountShieldedOutput; change the storage mock
for getScanXPrivKey in this test to resolve with a valid scan xpriv (e.g. a
dummy string) so processShieldedOutputs will invoke rewindAmountShieldedOutput,
then assert that storage.logger.debug (or the logger used inside
processShieldedOutputs) was called when rewind throws; update the expectations
to check for the debug log instead of relying on an empty branch that never
executes.
- Around line 179-210: The test currently has both wallet-owned addresses fail
by making storage.getScanXPrivKey always reject, so the provider decryption path
is never exercised; update the test to have getScanXPrivKey resolve for one
owned address (e.g., return a mock xpriv for bip32AddressIndex 0) and continue
rejecting for the other (index 2), and mock the provider's rewind/decryption
method (the provider used in processShieldedOutputs) to return a successful
rewind result for the address that gets a valid key so the test exercises mixed
filtering and decryption behavior; keep storage.getAddressInfo as-is and ensure
expectations assert the successful decrypted output alongside excluded/failed
ones accordingly.
In `@src/headers/shielded_outputs.ts`:
- Around line 92-105: If `mode` is not one of the handled enums, stop parsing
instead of falling through: in the ShieldedOutput parsing branch (where
ShieldedOutputMode is checked and tokenData, assetCommitment and surjectionProof
are set) add an explicit else that throws an Error (e.g. `throw new
Error(\`Unknown ShieldedOutputMode: ${mode}\`)`) so you don't consume the
ephemeral pubkey or misalign `buf` and you fail fast when an unexpected mode is
encountered; ensure this change is made alongside the existing checks that set
tokenData, assetCommitment, and surjectionProof before constructing the
ShieldedOutput.
- Around line 53-125: The deserialize method for ShieldedOutputsHeader accepts
truncated buffers because it uses buf.subarray() without length checks; update
ShieldedOutputsHeader.deserialize to validate buffer length before every
fixed-size read (33-byte commitments, 33-byte ephemeralPubkey, 33-byte
assetCommitment) and after each unpackToInt for length-prefixed fields (range
proof rpLen, scriptLen, surjection proof spLen) ensure rpLen/scriptLen/spLen <=
buf.length before slicing, and throw an Error on insufficient bytes; also
validate that numOutputs is feasible given remaining buf length (or check
minimal per-output remaining bytes inside the loop) so ShieldedOutput
construction only happens when all required bytes are present.
In `@src/models/transaction.ts`:
- Around line 368-371: The loop summing shielded outputs uses the mutable field
this.shieldedOutputs which isn’t rebuilt during deserialization; update
deserialization or the summation to use the canonical shielded headers instead.
Specifically, ensure createFromBytes() and getHeadersFromBytes() populate
this.shieldedOutputs (or remove reliance on it) so methods like the
weight/validation loop that uses sumOutputs and the shieldedOutputs array (and
the similar block around lines 417-420) either derive values directly from the
parsed shielded header structures or have the array hydrated from the serialized
headers before being read.
In `@src/new/wallet.ts`:
- Around line 837-845: getCurrentAddress returns shielded addresses when
opts?.legacy === false but still always calls getAddressPathForIndex(), which
builds transparent P2PKH/P2SH paths; update the call so the returned addressPath
matches the address type: detect legacy vs shielded via opts?.legacy (or by
checking the address format after getCurrentAddress), and call a shielded-path
builder instead of getAddressPathForIndex for non-legacy addresses — e.g., add
or use a method like getShieldedAddressPathForIndex(index) or extend
getAddressPathForIndex to accept a flag (legacy: boolean) and return the correct
chain path for shielded addresses, then return that path in getCurrentAddress
(same change required for the other occurrence at lines 852-855).
- Around line 903-923: Add an early-exit check in _drainLiveQueue to avoid
processing after the wallet has been closed: at the start of _drainLiveQueue
verify walletStopped (or state === CLOSED) and return immediately if true so you
never call onNewTx on a closed wallet, and perform the same check before calling
_scheduleLiveQueueDrain() (or before rescheduling) to avoid restarting drains
after stop() has been called; ensure _inLiveDrain/_liveQueueDraining state is
set appropriately when exiting early to keep invariants consistent.
- Around line 1538-1563: The code extracts shielded entries into shieldedEntries
and then replaces newTx.outputs with transparentOutputs, which shifts on-chain
output indexes; instead, keep newTx.outputs unchanged and only assign
newTx.shielded_outputs = shieldedEntries (i.e., remove the reassignment
newTx.outputs = transparentOutputs), or if you must mark removed entries
preserve array length by leaving placeholders so indices remain stable; update
the block that builds shieldedEntries (uses
transactionUtils.isShieldedOutputEntry, shieldedEntries, transparentOutputs,
newTx.shielded_outputs, newTx.outputs) to stop compacting/reindexing the
outputs.
In `@src/shielded/provider.ts`:
- Around line 29-35: The createAmountShieldedOutput function (and the similar
call sites mentioned) currently only checks value > Number.MAX_SAFE_INTEGER but
still allows negative or out-of-range BigInt values that will be rounded when
cast to Number; add a shared helper (e.g., assertBigIntInSafeRange or
ensureSafeBigIntToNumber) and use it wherever you call Number(value) — the
helper should verify the input is a BigInt, is >= 0 (reject negatives), and <=
Number.MAX_SAFE_INTEGER, and throw a clear error otherwise; replace the inlined
check in createAmountShieldedOutput and the other instances (the
ct.createAmountShieldedOutput callers at the noted locations) to call this
helper and then cast with Number(...) only after validation.
In `@src/types.ts`:
- Around line 652-654: The interface exposes pinCode?: string but
setPinCode(pinCode: string) forces a value; change setPinCode to accept an
optional parameter (setPinCode(pinCode?: string): void) so callers can clear the
cached PIN by calling setPinCode() with no args; update any implementing
classes/functions that declare setPinCode to match the signature (and ensure
they handle undefined to clear the stored pin) — target the setPinCode method
and any implementations that reference pinCode.
- Around line 715-719: Change the getSpendXPrivKey signature to return
Promise<string | undefined> to reflect optional spendMainKey in
IWalletAccessData; update the declaration of getSpendXPrivKey() in the interface
alongside the existing getScanXPrivKey(), getScanXPubKey(), and
getSpendXPubKey() declarations so callers can statically handle missing keys
(ensure any implementing classes and usages are updated to satisfy the new
Promise<string | undefined> return type).
In `@src/utils/bigint.ts`:
- Around line 34-38: When the reviver runs in Hermes and context?.source is
missing, do not silently return numeric values; instead detect numeric values
that are outside Number.isSafeInteger and immediately throw a descriptive error
to fail fast. Update the reviver in src/utils/bigint.ts (the branch that
currently does "if (!context?.source) { return value; }") to check if typeof
value === 'number' and !Number.isSafeInteger(value) and throw a clear error
mentioning the unsafe integer and the reviver/context, otherwise return the
value as before. This ensures bigint-valued fields are not silently corrupted
when the reviver context is unavailable.
In `@src/utils/storage.ts`:
- Around line 377-400: The code still uses shieldedLastLoadedAddressIndex in
calculations even when shielded keys are disabled, causing minLastLoaded and
shieldedTarget to be wrong (e.g., -1) and forcing re-fetch from index 0; fix by
conditioning those calculations on hasShieldedKeys: when hasShieldedKeys is
false, treat shieldedLastLoadedAddressIndex as equal to lastLoadedAddressIndex
(or simply ignore the shielded chain) so shieldedTarget and minLastLoaded only
reflect active chains; update the computations for shieldedTarget,
minLastLoaded, nextIndex, and count (used by loadAddresses) to use the adjusted
value so gap-extension starts at the correct suffix rather than at 0.
In `@src/utils/transaction.ts`:
- Around line 668-695: The shielded output builder in the
txData.shieldedOutputs.map block currently only validates common crypto fields;
update the validation to also check mode-specific fields before constructing
ShieldedOutput: if so.mode === 'FULLY_SHIELDED' (or the enum value used) ensure
so.assetCommitment and so.surjectionProof are present and throw a clear Error if
missing. Modify the map in the same function where ShieldedOutput is
instantiated to perform this additional check (alongside the existing checks for
commitment/rangeProof/script/ephemeralPubkey) so malformed FULLY_SHIELDED
entries fail fast with a descriptive message.
In `@src/utils/wallet.ts`:
- Around line 623-631: The code is deriving and persisting single-sig shielded
keys even for multisig wallets; update the guard around the derivation (the
block using argXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH) that sets
accessData.spendXpubkey and accessData.spendMainKey via encryptData) to skip
this entire derivation when walletType === WalletType.MULTISIG (or equivalent
multisig indicator), so multisig wallets never get single-sig shielded keys
stored; apply the same check to the other similar branch mentioned (the other
place that derives shielded keys).
---
Outside diff comments:
In `@src/models/address.ts`:
- Around line 94-119: validateAddress() currently accepts any known version byte
with either payload length; enforce that the version byte and payload length
correlate: after computing firstByte (version byte) and addressBytes.length,
check network.versionBytes.p2pkh and .p2sh require LEGACY_ADDR_LENGTH, and
network.versionBytes.shielded requires SHIELDED_ADDR_LENGTH (or vice-versa if
other version bytes exist), and if mismatch throw AddressError using the
existing errorMessage; update the same guard in the later block that also
validates version bytes (the code paths referenced by validateAddress, getType,
LEGACY_ADDR_LENGTH, SHIELDED_ADDR_LENGTH, network.versionBytes, and
AddressError).
In `@src/types.ts`:
- Around line 277-291: The IHistoryOutput should be split into a discriminated
union so shielded entries are typed separately: create two interfaces (e.g.,
TransparentHistoryOutput with required fields value, token_data, script,
decoded, token, spent_by and maybe selected_as_input, and ShieldedHistoryOutput
with type: 'shielded' and the shielded-only fields commitment, range_proof,
ephemeral_pubkey, asset_commitment, surjection_proof and without value/token
fields), then change IHistoryOutput to TransparentHistoryOutput |
ShieldedHistoryOutput and update usages such as IHistoryTx.outputs and any code
using isShieldedOutputEntry() to rely on the discriminant instead of unsafe
casts; also align the new Zod schema shape in new/wallet.ts to these types so
TypeScript and runtime validation match.
In `@src/utils/storage.ts`:
- Around line 327-346: checkIndexLimit() uses shieldedLastLoadedAddressIndex
unconditionally which causes an infinite loop when
deriveShieldedAddressFromStorage() is unavailable; update the logic in
checkIndexLimit (where storage.getWalletData(), isIndexLimitScanPolicy() and
storage.getIndexLimit() are used) to only include shieldedLastLoadedAddressIndex
when shielded derivation is supported (e.g., detect via
deriveShieldedAddressFromStorage() or a flag from storage), otherwise fall back
to using only lastLoadedAddressIndex so the nextIndex/count are computed from
the non-shielded cursor and the scan advances correctly.
---
Nitpick comments:
In `@__tests__/integration/configuration/docker-compose.yml`:
- Line 11: Replace the mutable image tags with their content-addressable digests
to make integration tests reproducible: update the value currently using
"hathornetwork/hathor-core:experimental-shielded-outputs-alpha-v1" (referenced
via ${HATHOR_LIB_INTEGRATION_TESTS_FULLNODE_IMAGE:-...}) to
"hathornetwork/hathor-core:experimental-shielded-outputs-alpha-v1@sha256:c1d372698f0b549dbb2e2e6955ae27ec1f9cf728948c6d758651636b4c3ff5f5"
and likewise pin "hathornetwork/tx-mining-service:shielded-outputs-v1" to
"hathornetwork/tx-mining-service:shielded-outputs-v1@sha256:9e78ce25741fa07b092c5ab55ad18d6f76f40a28d1b28e6580bb7fd133d511b4"
so the Docker Compose entries for those images are fixed by digest.
In `@src/models/shielded_output.ts`:
- Around line 44-64: The constructor on class ShieldedOutput accepts many
positional args (mode, commitment, rangeProof, tokenData, script,
ephemeralPubkey, assetCommitment, value, surjectionProof) which is error-prone
(especially assetCommitment vs value); refactor the constructor to accept a
single options object (e.g., { mode, commitment, rangeProof, tokenData, script,
ephemeralPubkey, assetCommitment?, value?, surjectionProof? }) and update the
constructor body to pull values from that object, update any call sites to pass
a named options object, and keep the same property names on the instance
(this.mode, this.commitment, this.assetCommitment, this.value,
this.surjectionProof) to preserve behavior.
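A sketch of the suggested options-object constructor, with simplified field types (the real ShieldedOutput signature differs):

```typescript
interface ShieldedOutputOptions {
  mode: number;
  commitment: Buffer;
  rangeProof: Buffer;
  tokenData: number;
  script: Buffer;
  ephemeralPubkey: Buffer;
  assetCommitment?: Buffer; // FullShielded only
  value?: bigint;           // AmountShielded reveals the value
  surjectionProof?: Buffer; // FullShielded only
}

class ShieldedOutput {
  mode: number;
  commitment: Buffer;
  rangeProof: Buffer;
  tokenData: number;
  script: Buffer;
  ephemeralPubkey: Buffer;
  assetCommitment?: Buffer;
  value?: bigint;
  surjectionProof?: Buffer;

  constructor(opts: ShieldedOutputOptions) {
    // Named fields make swapped arguments (e.g. assetCommitment vs value)
    // a compile-time error instead of a silent bug.
    this.mode = opts.mode;
    this.commitment = opts.commitment;
    this.rangeProof = opts.rangeProof;
    this.tokenData = opts.tokenData;
    this.script = opts.script;
    this.ephemeralPubkey = opts.ephemeralPubkey;
    this.assetCommitment = opts.assetCommitment;
    this.value = opts.value;
    this.surjectionProof = opts.surjectionProof;
  }
}
```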
In `@src/new/types.ts`:
- Around line 330-344: ProposedOutput currently allows any address with an
optional shielded flag so code can wrongly pair a legacy/base58 address with
shielded modes; replace ProposedOutput with a discriminated union (e.g.
LegacyOutput and ShieldedOutput) so the discriminator is the shielded field:
LegacyOutput has shielded?: undefined and accepts legacy Address type, while
ShieldedOutput has shielded: ShieldedOutputMode (or a specific literal like
"AMOUNT_SHIELDED"|"FULLY_SHIELDED") and its address type is the ShieldedAddress
type (or otherwise constrained); update usages expecting ProposedOutput to
accept the union and ensure ShieldedOutput includes the same
value/timelock/token properties.
In `@src/schemas.ts`:
- Around line 87-111: The schema IHistoryOutputSchema currently marks all fields
optional which weakens validation and allows malformed transparent or shielded
entries; update it to a discriminated union based on the type field by creating
two schemas (e.g., TransparentOutputSchema and ShieldedOutputSchema) where
TransparentOutputSchema requires fields like value (bigIntCoercibleSchema) and
related transparent properties and ShieldedOutputSchema requires type:
z.literal('shielded') and required shielded fields such as commitment,
range_proof, ephemeral_pubkey, asset_commitment, surjection_proof, then export
IHistoryOutputSchema as z.union([TransparentOutputSchema, ShieldedOutputSchema])
(retain .passthrough() if needed) so each shape is strictly validated.
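In the codebase this would be a Zod union of the two schemas; the same shape rule is shown here dependency-free for illustration, with abbreviated field lists:

```typescript
type RawOutput = Record<string, unknown>;

// Each variant has its own required fields, mirroring the suggested
// TransparentOutputSchema | ShieldedOutputSchema union.
function validateHistoryOutput(raw: RawOutput): 'transparent' | 'shielded' {
  if (raw.type === 'shielded') {
    // Shielded entries must carry the crypto fields and no cleartext value.
    for (const field of ['commitment', 'range_proof', 'ephemeral_pubkey']) {
      if (typeof raw[field] !== 'string') {
        throw new Error(`shielded output missing required field: ${field}`);
      }
    }
    return 'shielded';
  }
  // Transparent entries must carry value/token/script.
  for (const field of ['value', 'token', 'script']) {
    if (raw[field] === undefined) {
      throw new Error(`transparent output missing required field: ${field}`);
    }
  }
  return 'transparent';
}
```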
In `@src/storage/memory_store.ts`:
- Around line 469-476: The code uses an unsafe cast "(output as any).type !==
'shielded'" to detect shielded outputs; instead add a proper type or type guard
and use it: extend the output type in IHistoryTx (or create a dedicated
IHistoryOutput with optional "type?: string") and replace the any-cast check
with a type guard function (e.g., isShieldedHistoryOutput(output): boolean) or a
typed property check so the condition in the loop uses the guard (or the typed
property) to determine shielded outputs rather than casting to any; update
references in the loop around the variables "output", "IHistoryTx", and the file
"memory_store.ts" to call the new guard/typed property.
In `@src/storage/storage.ts`:
- Around line 83-91: The Storage class currently keeps the pinCode property in
memory for the wallet lifetime; update the code so any method that consumes
pinCode (e.g., the shielded output decryption routines that use
shieldedCryptoProvider / the promise used to chain processing of unlocked UTXOs)
clears this.pinCode when finished — use a try/finally around the
decryption/processing logic to set this.pinCode = undefined (or null) in the
finally block, and optionally add an async timeout-based clear (clear after X
ms) where pinCode is set to reduce exposure.
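A minimal sketch of the try/finally pattern, using a hypothetical holder class rather than the real Storage (which keeps more state and runs the shielded decryption inside the callback):

```typescript
class PinHolder {
  pinCode?: string;

  async withPin<T>(work: (pin: string) => Promise<T>): Promise<T> {
    if (this.pinCode === undefined) {
      throw new Error('no PIN cached');
    }
    try {
      return await work(this.pinCode);
    } finally {
      // Clear the secret even when decryption throws.
      this.pinCode = undefined;
    }
  }
}
```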
In `@src/utils/transaction.ts`:
- Around line 83-84: The helper is using null as a sentinel for optional state;
update the parameter and checks to use undefined instead: change the type
annotations from `{ type?: string | null }` to `{ type?: string }` in
isShieldedOutputEntry and the other occurrence around the code flagged (lines
~193-194), and update any strict equality checks or guards that compare to null
to instead check for undefined or use truthiness (`output.type !== undefined` or
`!!output.type`) so the module follows the repo-wide TS convention of using
undefined for optional/unprovided values.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 480066df-3fde-4f7a-b670-fd2088d6a737
⛔ Files ignored due to path filters (1)
package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (43)
- __tests__/headers/shielded_outputs.test.ts
- __tests__/integration/configuration/docker-compose.yml
- __tests__/integration/configuration/privnet.yml
- __tests__/integration/hathorwallet_facade.test.ts
- __tests__/integration/shielded_transactions.test.ts
- __tests__/models/address.test.ts
- __tests__/models/network.test.ts
- __tests__/models/shielded_output.test.ts
- __tests__/shielded/processing.test.ts
- __tests__/storage/storage.test.ts
- __tests__/wallet/walletServiceStorageProxy.test.ts
- package.json
- setupTests-integration.js
- src/constants.ts
- src/headers/parser.ts
- src/headers/shielded_outputs.ts
- src/headers/types.ts
- src/lib.ts
- src/models/address.ts
- src/models/network.ts
- src/models/shielded_output.ts
- src/models/transaction.ts
- src/new/sendTransaction.ts
- src/new/types.ts
- src/new/wallet.ts
- src/schemas.ts
- src/shielded/index.ts
- src/shielded/processing.ts
- src/shielded/provider.ts
- src/shielded/types.ts
- src/storage/memory_store.ts
- src/storage/storage.ts
- src/sync/stream.ts
- src/types.ts
- src/utils/address.ts
- src/utils/bigint.ts
- src/utils/shieldedAddress.ts
- src/utils/storage.ts
- src/utils/transaction.ts
- src/utils/wallet.ts
- src/wallet/types.ts
- src/wallet/wallet.ts
- src/wallet/walletServiceStorageProxy.ts
💤 Files with no reviewable changes (1)
- setupTests-integration.js
await sendTx.runFromMining();
// After mining and pushing, the hash should be set (it was null before mining)
expect(tx.hash).not.toBeNull();
expect(typeof tx.nonce).toBe('number');
expect(tx.nonce).toBeGreaterThan(0);
Line 78: avoid asserting nonce > 0.
A mined transaction can legitimately end up with nonce 0, so this makes the test flaky. Assert a guaranteed mining side effect instead and keep nonce as a type check only.
Suggested change

  await sendTx.runFromMining();
- expect(tx.nonce).toBeGreaterThan(0);
+ expect(tx.nonce).toEqual(expect.any(Number));
+ expect(tx.parents.length).toBeGreaterThan(0);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@__tests__/integration/hathorwallet_facade.test.ts` around lines 77 - 78, The
test currently asserts expect(tx.nonce).toBeGreaterThan(0) which is flaky;
change it to assert a guaranteed mining side effect (for example, assert tx.hash
is truthy or the transaction status/confirmation exists) and only perform a type
check on the nonce. Specifically, update the assertions after
sendTx.runFromMining() to something like: assert tx.hash (or
tx.status/confirmation) is present to prove it was mined, and replace the strict
numeric assertion with a type check like typeof tx.nonce === 'number'
(referencing sendTx.runFromMining and tx.nonce).
// Hermes (React Native) doesn't support the context parameter in JSON.parse reviver.
// Fall back to returning the value as-is when context is unavailable.
if (!context?.source) {
  return value;
}
🧩 Analysis chain
🏁 Scripts executed against HathorNetwork/hathor-wallet-lib to verify this finding:

# Full bigint.ts implementation
cat -n src/utils/bigint.ts
# How JSONBigInt / the reviver are used
rg "JSONBigInt|bigIntReviver" --type ts --type tsx -A 3 -B 1
# Tests covering the bigint utilities
fd -e test.ts -e spec.ts | xargs rg "bigint|JSONBigInt" -l
rg "bigint|bigIntReviver" __tests__/ -A 5 -B 1
cat -n __tests__/utils/bigint.test.js.ts
# Hermes / React Native handling and unsafe-integer references
rg -i "hermes|react.native|context" --type ts -B 2 -A 2 | head -100
rg -i "unsafe|precision|MAX_SAFE|MIN_SAFE" --type ts -B 1 -A 1
rg -i "hermes|react.native|fallback" --type ts
Fail fast on unsafe integers when reviver context is missing.
On line 36, Hermes will return the already-parsed number unchanged. Any integer outside the safe range loses precision during JSON.parse before the reviver runs, so bigint-valued fields become silently corrupted instead of failing deterministically.
💡 Minimal fix
// Hermes (React Native) doesn't support the context parameter in JSON.parse reviver.
// Fall back to returning the value as-is when context is unavailable.
if (!context?.source) {
+ if (Number.isInteger(value) && !Number.isSafeInteger(value)) {
+ throw new Error('JSONBigInt.parse cannot safely recover an unsafe integer in this runtime');
+ }
return value;
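A standalone demonstration of the hazard this check guards against: JSON.parse has already rounded the integer to the nearest double before any reviver runs, so the reviver cannot recover it. The guard helper name here is hypothetical:

```typescript
// 2^53 + 1 cannot be represented exactly as a double; JSON.parse rounds it
// silently before a reviver ever sees the value.
const parsed = JSON.parse('{"v": 9007199254740993}').v as number;

// Hypothetical guard: reject integers that can no longer be recovered exactly.
function assertSafeParsedNumber(value: unknown): void {
  if (typeof value === 'number' && Number.isInteger(value) && !Number.isSafeInteger(value)) {
    throw new Error('integer exceeds Number.MAX_SAFE_INTEGER; parse it as bigint instead');
  }
}
```

Failing fast here turns silent balance corruption into a deterministic error on runtimes without reviver context.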
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/bigint.ts` around lines 34 - 38, When the reviver runs in Hermes
and context?.source is missing, do not silently return numeric values; instead
detect numeric values that are outside Number.isSafeInteger and immediately
throw a descriptive error to fail fast. Update the reviver in
src/utils/bigint.ts (the branch that currently does "if (!context?.source) {
return value; }") to check if typeof value === 'number' and
!Number.isSafeInteger(value) and throw a clear error mentioning the unsafe
integer and the reviver/context, otherwise return the value as before. This
ensures bigint-valued fields are not silently corrupted when the reviver context
is unavailable.
if (txData.shieldedOutputs && txData.shieldedOutputs.length > 0) {
  const shieldedModels = txData.shieldedOutputs.map(so => {
    if (!so.commitment || !so.rangeProof || !so.script || !so.ephemeralPubkey) {
      throw new Error('Shielded output missing required crypto fields (commitment, rangeProof, script, ephemeralPubkey)');
    }
    const tokenData = this.getTokenDataFromOutput(
      {
        type: 'p2pkh',
        token: so.token,
        value: so.value,
        authorities: 0n,
        address: so.address,
        timelock: null,
      },
      txData.tokens,
    );

    return new ShieldedOutput(
      so.mode,
      so.commitment,
      so.rangeProof,
      tokenData,
      Buffer.from(so.script, 'hex'),
      so.ephemeralPubkey,
      so.assetCommitment,
      so.value,
      so.surjectionProof,
    );
Validate FULLY_SHIELDED-specific fields before building the model.
Only the shared fields are checked here. A FULLY_SHIELDED entry with missing assetCommitment or surjectionProof will still construct a ShieldedOutput and fail later when the tx is serialized or processed.
🧪 Suggested fix
  const shieldedModels = txData.shieldedOutputs.map(so => {
    if (!so.commitment || !so.rangeProof || !so.script || !so.ephemeralPubkey) {
      throw new Error('Shielded output missing required crypto fields (commitment, rangeProof, script, ephemeralPubkey)');
    }
+   if (
+     so.mode === ShieldedOutputMode.FULLY_SHIELDED &&
+     (!so.assetCommitment || !so.surjectionProof)
+   ) {
+     throw new Error(
+       'Fully shielded output missing required crypto fields (assetCommitment, surjectionProof)'
+     );
+   }
    const tokenData = this.getTokenDataFromOutput(
      {
        type: 'p2pkh',
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/transaction.ts` around lines 668 - 695, The shielded output builder
in the txData.shieldedOutputs.map block currently only validates common crypto
fields; update the validation to also check mode-specific fields before
constructing ShieldedOutput: if so.mode === 'FULLY_SHIELDED' (or the enum value
used) ensure so.assetCommitment and so.surjectionProof are present and throw a
clear Error if missing. Modify the map in the same function where ShieldedOutput
is instantiated to perform this additional check (alongside the existing checks
for commitment/rangeProof/script/ephemeralPubkey) so malformed FULLY_SHIELDED
entries fail fast with a descriptive message.
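The mode-specific validation can be sketched as follows; the output shape is a simplified stand-in for the library's actual types, and the enum values follow the FullShielded=2, AmountShielded=1 convention mentioned elsewhere in this review:

```typescript
enum ShieldedOutputMode {
  AMOUNT_SHIELDED = 1,
  FULLY_SHIELDED = 2,
}

interface ShieldedOutputDef {
  mode: ShieldedOutputMode;
  commitment?: string;
  rangeProof?: string;
  assetCommitment?: string;
  surjectionProof?: string;
}

function validateShieldedOutput(so: ShieldedOutputDef): void {
  // Fields required by both modes.
  if (!so.commitment || !so.rangeProof) {
    throw new Error('shielded output missing required crypto fields');
  }
  // FULLY_SHIELDED also hides the token, so it additionally needs the asset
  // commitment and the surjection proof binding it to a real token.
  if (
    so.mode === ShieldedOutputMode.FULLY_SHIELDED &&
    (!so.assetCommitment || !so.surjectionProof)
  ) {
    throw new Error('fully shielded output missing assetCommitment/surjectionProof');
  }
}
```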
// Derive shielded spend key if root key is available
// Scan key uses the same account as legacy (m/44'/280'/0'/0), so no separate derivation needed.
if (argXpriv.depth === 0) {
  const spendAcctXpriv = argXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
  const spendXpriv2 = spendAcctXpriv.deriveNonCompliantChild(0);

  accessData.spendXpubkey = spendXpriv2.xpubkey;
  accessData.spendMainKey = encryptData(spendXpriv2.xprivkey, pin);
}
Do not derive single-sig shielded keys for multisig wallets.
These branches also run for WalletType.MULTISIG, but m/44'/280'/1'/0 is a plain single-sig path unrelated to the wallet’s m/45' multisig account. Persisting it makes a multisig wallet look shielded-capable with keys that one signer controls alone, which breaks the wallet’s signing model if those paths are ever used.
🔒 Safer guard
- if (argXpriv.depth === 0) {
+ if (argXpriv.depth === 0 && walletType === WalletType.P2PKH) {
    const spendAcctXpriv = argXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
    const spendXpriv2 = spendAcctXpriv.deriveNonCompliantChild(0);
    accessData.spendXpubkey = spendXpriv2.xpubkey;
    accessData.spendMainKey = encryptData(spendXpriv2.xprivkey, pin);
  }

- const spendAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
- const spendXpriv = spendAcctXpriv.deriveNonCompliantChild(0);
+ let spendXpriv: HDPrivateKey | undefined;
+ if (walletType === WalletType.P2PKH) {
+   const spendAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
+   spendXpriv = spendAcctXpriv.deriveNonCompliantChild(0);
+ }
  ...
- spendXpubkey: spendXpriv.xpubkey,
- spendMainKey: encryptData(spendXpriv.xprivkey, pin),
+ ...(spendXpriv && {
+   spendXpubkey: spendXpriv.xpubkey,
+   spendMainKey: encryptData(spendXpriv.xprivkey, pin),
+ }),

Also applies to: 704-719
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/wallet.ts` around lines 623 - 631, The code is deriving and
persisting single-sig shielded keys even for multisig wallets; update the guard
around the derivation (the block using
argXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH) that sets
accessData.spendXpubkey and accessData.spendMainKey via encryptData) to skip
this entire derivation when walletType === WalletType.MULTISIG (or equivalent
multisig indicator), so multisig wallets never get single-sig shielded keys
stored; apply the same check to the other similar branch mentioned (the other
place that derives shielded keys).
@@ -4,18 +4,16 @@ services:
  # All the following services are related to the core of the Private Network
TODO we must make it work without cpuminer, using the dev miner
- Fix addTx normalization so shielded outputs are processed during wallet
restart/history sync (not just websocket path)
- Split MAX_OUTPUTS validation: transparent (255) and shielded (32)
limits are independent, matching hathor-core
- Fix FullShielded serialization to throw on missing required fields
instead of silently using zero buffers
- Revert unnecessary enqueueOnNewTx live queue rewrite back to original
promise chain
- Revert unnecessary bigIntReviver Hermes workaround
- Extract shielded output creation logic into src/shielded/creation.ts
- Extract publicKeyToP2PKH helper, reuse in shieldedAddress.ts
- Deduplicate header serialization via serializeWith helper
- Reuse tokenUtils.getTokenIndexFromData in resolveTokenUid
- Split Zod output schema into Transparent | AmountShielded | FullShielded
- Add integration tests for wallet restart, FullShielded balance, mixed
modes, fee validation, and custom token shielded outputs (skipped with
known bug documented)
- Add TODO comments for IHistoryOutput type refactor, pinCode clearing,
and other follow-up items
- Improve code comments throughout for clarity
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
89a0a76 to 62612cc
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/new/sendTransaction.ts (1)
314-330: ⚠️ Potential issue | 🟠 Major
Use the final fee when validating extra HTR inputs.
Now that shielded fees are added here, the requiresFees guard just above still checks the transparent-only fee. A custom-token shielded send with user-supplied HTR fee inputs will still raise INVALID_INPUT even when totalFee > 0.
🛠️ Proposed fix

- if (requiresFees.length > 0 && fee === 0n) {
-   const err = new SendTxError(ErrorMessages.INVALID_INPUT);
-   err.errorData = requiresFees;
-   throw err;
- }
-
  // Calculate shielded output fee
  let shieldedFee = 0n;
  for (const def of shieldedOutputDefs) {
    if (def.shieldedMode === ShieldedOutputMode.FULLY_SHIELDED) {
      shieldedFee += FEE_PER_FULL_SHIELDED_OUTPUT;
@@
  const totalFee = fee + shieldedFee;
+
+ if (requiresFees.length > 0 && totalFee === 0n) {
+   const err = new SendTxError(ErrorMessages.INVALID_INPUT);
+   err.errorData = requiresFees;
+   throw err;
+ }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/new/sendTransaction.ts` around lines 314 - 330, The current validation uses the transparent-only fee variable (`fee`) to decide if HTR-paying inputs are required (`requiresFees`), but shielded output fees (computed via `shieldedOutputDefs` into `shieldedFee`) are added afterwards to produce `totalFee`; update the guard that throws `SendTxError(ErrorMessages.INVALID_INPUT)` to consider the combined `totalFee` instead of `fee` (i.e., compute `shieldedFee` first or compute `totalFee` before checking `requiresFees`) so that a nonzero `totalFee` prevents the invalid-input error when `requiresFees.length > 0`; keep the `err.errorData = requiresFees` behavior and the `SendTxError`/`ErrorMessages.INVALID_INPUT` usage.

src/types.ts (1)
277-297: ⚠️ Potential issue | 🟠 Major
IHistoryOutput still encodes impossible required fields for shielded entries.
Line 283 through Line 288 keep transparent-only fields required, but shielded entries can omit them. This leaves the public type unsound and encourages unsafe field access.
Proposed typing fix (discriminated union)

-export interface IHistoryOutput {
-  value: OutputValueType;
-  token_data: number;
-  script: string;
-  decoded: IHistoryOutputDecoded;
-  token: string;
-  spent_by: string | null;
-  selected_as_input?: boolean;
-  // Shielded output entries (appended by fullnode's to_json_extended) have these instead of value/token:
-  type?: string; // 'shielded' for shielded output entries
-  commitment?: string; // hex
-  range_proof?: string; // hex
-  ephemeral_pubkey?: string; // hex
-  asset_commitment?: string; // hex (FullShielded only)
-  surjection_proof?: string; // hex (FullShielded only)
-}
+export interface IHistoryTransparentOutput {
+  type?: undefined;
+  value: OutputValueType;
+  token_data: number;
+  script: string;
+  decoded: IHistoryOutputDecoded;
+  token: string;
+  spent_by: string | null;
+  selected_as_input?: boolean;
+}
+
+export interface IHistoryShieldedOutputEntry {
+  type: 'shielded';
+  commitment: string;
+  range_proof: string;
+  ephemeral_pubkey: string;
+  asset_commitment?: string;
+  surjection_proof?: string;
+  // explicitly absent on shielded entries from to_json_extended
+  value?: never;
+  token_data?: never;
+  script?: never;
+  decoded?: never;
+  token?: never;
+  spent_by?: never;
+}
+
+export type IHistoryOutput = IHistoryTransparentOutput | IHistoryShieldedOutputEntry;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/types.ts` around lines 277 - 297, The current IHistoryOutput mixes required transparent-only fields with shielded entries; change it to a discriminated union to make the types sound: define a common base (e.g., HistoryOutputBase) for shared optional fields, then create ITransparentOutput (no "type" field or type?: undefined) with required fields value, token_data, script, decoded, token, spent_by, and create IShieldedOutputEntry with type: 'shielded' and the shielded-specific fields (commitment, range_proof, ephemeral_pubkey, asset_commitment, surjection_proof) and omit the transparent-only fields; finally set IHistoryOutput = ITransparentOutput | IShieldedOutputEntry and ensure callers narrow on the discriminant (type === 'shielded') before accessing shielded-only fields or assume transparent fields otherwise.
♻️ Duplicate comments (5)
src/headers/shielded_outputs.ts (1)
53-109: ⚠️ Potential issue | 🟠 Major
Reject malformed shielded headers instead of slicing through them.
Every field here is read with subarray() and no bounds checks, and unsupported mode values still fall through to the ephemeral-pubkey read. A truncated or unknown payload can desync buf and deserialize as a bogus header instead of failing fast. Apply the same guard to every fixed-size and length-prefixed read.
🛠️ Suggested direction

  let mode: number;
  [mode, buf] = unpackToInt(1, false, buf);
+ if (
+   mode !== ShieldedOutputMode.AMOUNT_SHIELDED &&
+   mode !== ShieldedOutputMode.FULLY_SHIELDED
+ ) {
+   throw new Error(`Unsupported shielded output mode: ${mode}`);
+ }
  // Commitment (33 bytes)
+ if (buf.length < 33) {
+   throw new Error('Truncated shielded output commitment');
+ }
  const commitment = Buffer.from(buf.subarray(0, 33));
  buf = buf.subarray(33);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/headers/shielded_outputs.ts` around lines 53 - 109, The deserialize function is vulnerable to out-of-bounds reads and silent desyncs; before every fixed-size read (33-byte commitments, ephemeralPubkey, assetCommitment) and before consuming variable-length fields (after unpackToInt for rpLen, scriptLen, spLen) assert buf.length >= requiredSize and throw an Error if not, and explicitly validate the mode read from unpackToInt against ShieldedOutputMode (handle AMOUNT_SHIELDED and FULLY_SHIELDED, otherwise throw) so unsupported modes do not fall through; update logic in deserialize, referencing getVertexHeaderIdFromBuffer, unpackToInt, ShieldedOutputMode, and the local variables commitment, rangeProof, script, assetCommitment, surjectionProof, ephemeralPubkey to perform these length checks and fail fast on malformed input.

src/new/wallet.ts (1)
1477-1502: ⚠️ Potential issue | 🔴 Critical
Preserve on-chain output indexes.
Replacing newTx.outputs with a compacted transparent-only array renumbers every transparent output after the first shielded entry. The rest of the wallet still uses array position as the canonical output index, so mixed transactions will save/spend the wrong UTXOs.
🛠️ Suggested fix

  if (!newTx.shielded_outputs) {
    const shieldedEntries: IHistoryShieldedOutput[] = [];
-   const transparentOutputs: IHistoryOutput[] = [];
    for (const output of newTx.outputs) {
      if (transactionUtils.isShieldedOutputEntry(output)) {
        shieldedEntries.push({
          mode: output.asset_commitment ? 2 : 1, // FullShielded=2, AmountShielded=1
          commitment: output.commitment!, // already hex
@@
          surjection_proof: output.surjection_proof
            ? Buffer.from(output.surjection_proof, 'base64').toString('hex')
            : undefined,
        });
-     } else {
-       transparentOutputs.push(output);
      }
    }
    if (shieldedEntries.length > 0) {
      newTx.shielded_outputs = shieldedEntries;
-     newTx.outputs = transparentOutputs;
    }
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/new/wallet.ts` around lines 1477 - 1502, The current code compacts newTx.outputs into transparentOutputs, which shifts array indexes and breaks UTXO indexing; instead, keep newTx.outputs intact and only extract shielded entries into newTx.shielded_outputs without reassigning newTx.outputs. Locate the block using newTx, shielded_outputs, outputs and transactionUtils.isShieldedOutputEntry and change the logic so you push shielded entries into shieldedEntries but do not replace newTx.outputs (or if you must remove shielded entries, replace them with a stable placeholder object that preserves the original array length/indexes) and then set newTx.shielded_outputs = shieldedEntries when non-empty.

src/utils/storage.ts (1)
328-347: ⚠️ Potential issue | 🟠 Major
Ignore shielded cursors when shielded derivation is disabled.
These calculations still fold shieldedLastLoadedAddressIndex into the Math.min(...) / target selection even when the wallet has no shielded keys. With the default -1, both policies restart from index 0 and re-fetch the whole prefix instead of only the missing suffix.
🛠️ Suggested direction

- const lastLoaded = Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex);
+ const hasShieldedKeys = !!(await storage.getAccessData())?.spendXpubkey;
+ const effectiveShieldedLastLoaded = hasShieldedKeys
+   ? shieldedLastLoadedAddressIndex
+   : lastLoadedAddressIndex;
+ const lastLoaded = Math.min(lastLoadedAddressIndex, effectiveShieldedLastLoaded);

Use the same guard for shieldedTarget/minLastLoaded in checkGapLimit().
Also applies to: 378-399

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/storage.ts` around lines 328 - 347, The code currently uses Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex) even when shielded derivation is disabled (shieldedLastLoadedAddressIndex is -1), causing unnecessary re-fetches; update the logic where lastLoaded/minLastLoaded and shieldedTarget are computed to ignore shieldedLastLoadedAddressIndex when it's negative (treat <0 as "no shielded keys"), e.g., in the block using lastLoadedAddressIndex and shieldedLastLoadedAddressIndex replace the Math.min call with a guarded computation that uses shieldedLastLoadedAddressIndex only if shieldedLastLoadedAddressIndex >= 0, and apply the same guard in checkGapLimit() where shieldedTarget/minLastLoaded are calculated so shielded cursors are skipped when shielded derivation is disabled.

src/types.ts (2)
657-660: ⚠️ Potential issue | 🟠 Major
Allow clearing cached PIN via the interface.
Line 658 declares pinCode?: string, but Line 659 requires a string argument. This prevents explicit in-memory secret clearing through the API.
Minimal fix

- setPinCode(pinCode: string): void;
+ setPinCode(pinCode?: string): void;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/types.ts` around lines 657 - 660, The interface currently defines pinCode?: string but setPinCode(pinCode: string): void requires a string, preventing callers from clearing the cached PIN; change the setPinCode signature to accept an optional/nullable value (e.g. setPinCode(pinCode?: string | null): void) and update any implementations of setPinCode and callers to handle undefined/null as "clear the in-memory PIN" while preserving existing behavior when a string is passed; update types in src/types.ts for the pinCode and setPinCode symbols accordingly.
720-724: ⚠️ Potential issue | 🟠 Major
getSpendXPrivKey return type should model old-wallet absence.
spendMainKey is optional in IWalletAccessData (Line 446), so Line 722 should not guarantee Promise<string>.
Minimal fix

- getSpendXPrivKey(pinCode: string): Promise<string>;
+ getSpendXPrivKey(pinCode: string): Promise<string | undefined>;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/types.ts` around lines 720 - 724, The TypeScript interface declares getSpendXPrivKey(pinCode: string): Promise<string> but IWalletAccessData marks spendMainKey as optional, so update the getSpendXPrivKey signature to return Promise<string | undefined> to model absence of the old-wallet key; locate and change the method declaration for getSpendXPrivKey in the interface in src/types.ts (and adjust any dependent types/usages to handle undefined) to keep signatures consistent with IWalletAccessData.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/utils/storage.ts`:
- Around line 840-847: processShieldedOutputs(...) returns an object containing
an outputs array, but the code currently iterates the wrapper (shieldedResults)
which causes a runtime error; change the handling after calling
processShieldedOutputs in src/utils/storage.ts (the variable shieldedResults) to
access and iterate the outputs array (e.g., shieldedResults.outputs or
destructure { outputs }) and then loop over those decrypted outputs to record
shielded UTXOs and balances instead of iterating shieldedResults directly.
- Around line 752-760: The code checks addressInfo.addressType for
'shielded-spend' but deriveShieldedAddressFromStorage() persists the
discriminator as 'shielded_spend', causing shielded addresses to be
misclassified and shieldedMaxIndexUsed to never advance; update the comparisons
that test addressInfo.addressType (the occurrences around the block that updates
shieldedMaxIndexUsed and legacyMaxIndexUsed and the other two occurrences
mentioned) to use 'shielded_spend' instead of 'shielded-spend' so shielded
addresses are recognized consistently; verify the three locations (the branch
updating shieldedMaxIndexUsed, and the two similar branches around lines
~891-899 and ~920-928) are changed to the underscore form.
---
Outside diff comments:
In `@src/new/sendTransaction.ts`:
- Around line 314-330: The current validation uses the transparent-only fee
variable (`fee`) to decide if HTR-paying inputs are required (`requiresFees`),
but shielded output fees (computed via `shieldedOutputDefs` into `shieldedFee`)
are added afterwards to produce `totalFee`; update the guard that throws
`SendTxError(ErrorMessages.INVALID_INPUT)` to consider the combined `totalFee`
instead of `fee` (i.e., compute `shieldedFee` first or compute `totalFee` before
checking `requiresFees`) so that a nonzero `totalFee` prevents the invalid-input
error when `requiresFees.length > 0`; keep the `err.errorData = requiresFees`
behavior and the `SendTxError`/`ErrorMessages.INVALID_INPUT` usage.
In `@src/types.ts`:
- Around line 277-297: The current IHistoryOutput mixes required
transparent-only fields with shielded entries; change it to a discriminated
union to make the types sound: define a common base (e.g., HistoryOutputBase)
for shared optional fields, then create ITransparentOutput (no "type" field or
type?: undefined) with required fields value, token_data, script, decoded,
token, spent_by, and create IShieldedOutputEntry with type: 'shielded' and the
shielded-specific fields (commitment, range_proof, ephemeral_pubkey,
asset_commitment, surjection_proof) and omit the transparent-only fields;
finally set IHistoryOutput = ITransparentOutput | IShieldedOutputEntry and
ensure callers narrow on the discriminant (type === 'shielded') before accessing
shielded-only fields or assume transparent fields otherwise.
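The discriminated union the comment asks for can be sketched as follows. Field lists are abbreviated and illustrative; the real interfaces in src/types.ts carry more members.

```typescript
// Illustrative shapes only, not the real history-output interfaces.
interface HistoryOutputBase {
  spent_by?: string | null;
}

interface ITransparentOutput extends HistoryOutputBase {
  type?: undefined; // no discriminant on transparent entries
  value: bigint;
  token_data: number;
  script: string;
  token: string;
}

interface IShieldedOutputEntry extends HistoryOutputBase {
  type: 'shielded';
  commitment: string;
  range_proof: string;
  ephemeral_pubkey: string;
  asset_commitment?: string;
  surjection_proof?: string;
}

type IHistoryOutput = ITransparentOutput | IShieldedOutputEntry;

// Callers narrow on the discriminant before touching shielded-only fields.
function isShielded(o: IHistoryOutput): o is IShieldedOutputEntry {
  return o.type === 'shielded';
}
```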
---
Duplicate comments:
In `@src/headers/shielded_outputs.ts`:
- Around line 53-109: The deserialize function is vulnerable to out-of-bounds
reads and silent desyncs; before every fixed-size read (33-byte commitments,
ephemeralPubkey, assetCommitment) and before consuming variable-length fields
(after unpackToInt for rpLen, scriptLen, spLen) assert buf.length >=
requiredSize and throw an Error if not, and explicitly validate the mode read
from unpackToInt against ShieldedOutputMode (handle AMOUNT_SHIELDED and
FULLY_SHIELDED, otherwise throw) so unsupported modes do not fall through;
update logic in deserialize, referencing getVertexHeaderIdFromBuffer,
unpackToInt, ShieldedOutputMode, and the local variables commitment, rangeProof,
script, assetCommitment, surjectionProof, ephemeralPubkey to perform these
length checks and fail fast on malformed input.
In `@src/new/wallet.ts`:
- Around line 1477-1502: The current code compacts newTx.outputs into
transparentOutputs, which shifts array indexes and breaks UTXO indexing;
instead, keep newTx.outputs intact and only extract shielded entries into
newTx.shielded_outputs without reassigning newTx.outputs. Locate the block using
newTx, shielded_outputs, outputs and transactionUtils.isShieldedOutputEntry and
change the logic so you push shielded entries into shieldedEntries but do not
replace newTx.outputs (or if you must remove shielded entries, replace them with
a stable placeholder object that preserves the original array length/indexes)
and then set newTx.shielded_outputs = shieldedEntries when non-empty.
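One way to read the suggestion, sketched with simplified output shapes (the `kind` field and the `DemoOutput` type are invented for illustration, not the real history-output types):

```typescript
// Sketch: collect shielded entries without compacting the outputs array,
// so transparent UTXO indexes remain stable.
interface DemoOutput {
  kind: 'transparent' | 'shielded';
  data: string;
}

function splitShielded(outputs: DemoOutput[]): {
  outputs: DemoOutput[];
  shieldedEntries: DemoOutput[];
} {
  const shieldedEntries = outputs.filter(o => o.kind === 'shielded');
  // outputs is returned untouched: index i still refers to the same entry.
  return { outputs, shieldedEntries };
}
```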
In `@src/types.ts`:
- Around line 657-660: The interface currently defines pinCode?: string but
setPinCode(pinCode: string): void requires a string, preventing callers from
clearing the cached PIN; change the setPinCode signature to accept an
optional/nullable value (e.g. setPinCode(pinCode?: string | null): void) and
update any implementations of setPinCode and callers to handle undefined/null as
“clear the in-memory PIN” while preserving existing behavior when a string is
passed; update types in src/types.ts for the pinCode and setPinCode symbols
accordingly.
- Around line 720-724: The TypeScript interface declares
getSpendXPrivKey(pinCode: string): Promise<string> but IWalletAccessData marks
spendMainKey as optional, so update the getSpendXPrivKey signature to return
Promise<string | undefined> to model absence of the old-wallet key; locate and
change the method declaration for getSpendXPrivKey in the interface in
src/types.ts (and adjust any dependent types/usages to handle undefined) to keep
signatures consistent with IWalletAccessData.
In `@src/utils/storage.ts`:
- Around line 328-347: The code currently uses Math.min(lastLoadedAddressIndex,
shieldedLastLoadedAddressIndex) even when shielded derivation is disabled
(shieldedLastLoadedAddressIndex is -1), causing unnecessary re-fetches; update
the logic where lastLoaded/minLastLoaded and shieldedTarget are computed to
ignore shieldedLastLoadedAddressIndex when it's negative (treat <0 as "no
shielded keys") — e.g., in the block using lastLoadedAddressIndex and
shieldedLastLoadedAddressIndex replace the Math.min call with a guarded
computation that uses shieldedLastLoadedAddressIndex only if
shieldedLastLoadedAddressIndex >= 0, and apply the same guard in checkGapLimit()
where shieldedTarget/minLastLoaded are calculated so shielded cursors are
skipped when shielded derivation is disabled.
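The guarded computation this prompt asks for can be sketched as a standalone helper. The function name and the -1 sentinel semantics are assumptions for illustration, not code from the repository.

```typescript
// Hypothetical helper mirroring the suggested guard: a negative shielded
// cursor means shielded derivation is disabled, so it must not drag the
// minimum down and force re-fetches.
function effectiveLastLoaded(
  lastLoadedAddressIndex: number,
  shieldedLastLoadedAddressIndex: number
): number {
  if (shieldedLastLoadedAddressIndex < 0) {
    // No shielded keys: ignore the shielded cursor entirely.
    return lastLoadedAddressIndex;
  }
  return Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex);
}
```

The same guard would apply wherever `checkGapLimit()` computes `minLastLoaded` from the two cursors.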
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9ff1cdb9-b037-47d6-a154-6121c002c565
📒 Files selected for processing (20)
- __tests__/integration/hathorwallet_facade.test.ts
- __tests__/integration/shielded_transactions.test.ts
- __tests__/models/network.test.ts
- __tests__/models/shielded_output.test.ts
- src/constants.ts
- src/headers/shielded_outputs.ts
- src/models/shielded_output.ts
- src/models/transaction.ts
- src/new/sendTransaction.ts
- src/new/wallet.ts
- src/schemas.ts
- src/shielded/creation.ts
- src/shielded/processing.ts
- src/storage/memory_store.ts
- src/storage/storage.ts
- src/types.ts
- src/utils/address.ts
- src/utils/bigint.ts
- src/utils/shieldedAddress.ts
- src/utils/storage.ts
✅ Files skipped from review due to trivial changes (2)
- src/models/shielded_output.ts
- __tests__/models/shielded_output.test.ts
🚧 Files skipped from review as they are similar to previous changes (10)
- __tests__/models/network.test.ts
- src/schemas.ts
- __tests__/integration/hathorwallet_facade.test.ts
- src/utils/bigint.ts
- src/shielded/processing.ts
- src/utils/address.ts
- src/storage/memory_store.ts
- __tests__/integration/shielded_transactions.test.ts
- src/storage/storage.ts
- src/utils/shieldedAddress.ts
Actionable comments posted: 4
♻️ Duplicate comments (10)
src/types.ts (2)
737-741: ⚠️ Potential issue | 🟠 Major

Return type mismatch for `getSpendXPrivKey`.

The `spendMainKey` field is optional in `IWalletAccessData`, meaning older wallets won't have it. However, `getSpendXPrivKey()` returns `Promise<string>` rather than `Promise<string | undefined>`, which doesn't encode this possibility. The public key accessors (`getScanXPubKey`, `getSpendXPubKey`) correctly return `Promise<string | undefined>`.

- getSpendXPrivKey(pinCode: string): Promise<string>;
+ getSpendXPrivKey(pinCode: string): Promise<string | undefined>;

Also apply the same fix to `getScanXPrivKey` for consistency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/types.ts` around lines 737 - 741, The return types for the shielded private-key accessors are incorrect: change getScanXPrivKey(pinCode: string) and getSpendXPrivKey(pinCode: string) to return Promise<string | undefined> (matching getScanXPubKey/getSpendXPubKey and the optional spendMainKey in IWalletAccessData) so callers can handle missing keys; update the method signatures in src/types.ts accordingly (keep the parameter names the same).
674-676: ⚠️ Potential issue | 🟠 Major

`setPinCode` should accept optional parameter to allow clearing.

The `pinCode` property is optional (`pinCode?: string`), but `setPinCode(pinCode: string)` requires a value, providing no interface-level way to clear the cached PIN. This unnecessarily extends the secret's lifetime in memory.

- setPinCode(pinCode: string): void;
+ setPinCode(pinCode?: string): void;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/types.ts` around lines 674 - 676, The setter signature for pinCode is too strict: the interface has pinCode?: string but setPinCode(pinCode: string) requires a non-optional value, preventing callers from clearing the PIN; change setPinCode to accept an optional value (e.g., pinCode?: string or pinCode: string | undefined) on the interface/type that declares pinCode and setPinCode (referencing pinCode and setPinCode in src/types.ts), and then update any implementing classes/functions and their callers to pass undefined or omit the argument when clearing the cached PIN.
src/utils/transaction.ts (1)
668-696: ⚠️ Potential issue | 🟠 Major

Missing validation for `FULLY_SHIELDED`-specific fields.

The validation only checks common fields (`commitment`, `rangeProof`, `script`, `ephemeralPubkey`). A `FULLY_SHIELDED` entry missing `assetCommitment` or `surjectionProof` will pass validation here but fail during serialization.

Add mode-specific validation:

  if (!so.commitment || !so.rangeProof || !so.script || !so.ephemeralPubkey) {
    throw new Error('Shielded output missing required crypto fields (commitment, rangeProof, script, ephemeralPubkey)');
  }
+ if (
+   so.mode === ShieldedOutputMode.FULLY_SHIELDED &&
+   (!so.assetCommitment || !so.surjectionProof)
+ ) {
+   throw new Error(
+     'Fully shielded output missing required crypto fields (assetCommitment, surjectionProof)'
+   );
+ }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/transaction.ts` around lines 668 - 696, The current validation in the txData.shieldedOutputs mapping only enforces common crypto fields and misses mode-specific requirements; update the validation inside the txData.shieldedOutputs.map callback (where ShieldedOutput is constructed) to additionally check when so.mode === 'FULLY_SHIELDED' that so.assetCommitment and so.surjectionProof are present (non-null/defined) and throw a clear Error if they are missing so construction/serialization won't fail later.
src/shielded/provider.ts (2)
29-35: ⚠️ Potential issue | 🟠 Major

Missing validation for negative bigint values.

The range check only rejects values above `MAX_SAFE_INTEGER` but allows negative values through. Since these represent output amounts, negative values should also be rejected to prevent unexpected behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/shielded/provider.ts` around lines 29 - 35, The createAmountShieldedOutput function currently only checks for values above Number.MAX_SAFE_INTEGER but allows negative BigInt inputs; add a validation to reject value < 0n and throw a clear Error (e.g., "Shielded output value must be non-negative") before calling ct.createAmountShieldedOutput. Update the guard around value (the parameter named value used in createAmountShieldedOutput and passed to ct.createAmountShieldedOutput) to enforce 0n <= value <= BigInt(Number.MAX_SAFE_INTEGER) and keep the existing error behavior for out-of-range values.
87-92: ⚠️ Potential issue | 🟠 Major

`toSafeNumber` helper also misses negative check.

The helper function defined in `computeBalancingBlindingFactor` has the same issue - it should also reject negative values for inputs/outputs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/shielded/provider.ts` around lines 87 - 92, The helper toSafeNumber currently only checks for values greater than Number.MAX_SAFE_INTEGER but misses negative values; update the toSafeNumber used in the shielded code (and the duplicate helper inside computeBalancingBlindingFactor) to also validate that v is >= 0 and throw a clear Error (e.g., 'Shielded output value out of safe range') when v is negative or exceeds MAX_SAFE_INTEGER so both conversion sites reject negative bigint inputs.
src/utils/wallet.ts (2)
704-719: ⚠️ Potential issue | 🟠 Major

Same multisig concern applies to `generateAccessDataFromSeed`.

This function unconditionally derives shielded spend keys even when `walletType === WalletType.MULTISIG`. Apply the same guard as suggested for `generateAccessDataFromXpriv`.

🤖 Prompt for AI Agents
623-631: ⚠️ Potential issue | 🟠 Major

Shielded keys are derived for all wallet types including multisig.

This code derives single-sig shielded keys (`m/44'/280'/1'/0`) even for `WalletType.MULTISIG` wallets. This could break the multisig signing model if these keys are ever used, as one signer would control them alone.

Consider guarding against multisig:

- if (argXpriv.depth === 0) {
+ if (argXpriv.depth === 0 && walletType === WalletType.P2PKH) {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/wallet.ts` around lines 623 - 631, The current block unconditionally derives shielded spend keys from argXpriv (using SHIELDED_SPEND_ACCT_PATH and storing into accessData.spendXpubkey / accessData.spendMainKey), which will produce single-sig shielded keys even for multisig wallets; update the logic to skip deriving/encrypting these keys when the wallet type is WalletType.MULTISIG (or any non-single-sig type) by checking the wallet type before calling deriveNonCompliantChild and encryptData, so multisig flows do not get private shielded keys created for a single signer.
src/models/transaction.ts (1)
369-372: ⚠️ Potential issue | 🟠 Major

`shieldedOutputs` is not populated during deserialization.

The `getOutputsSum()` method now includes `this.shieldedOutputs`, but `createFromBytes()` and `getHeadersFromBytes()` never rebuild this array from the serialized `ShieldedOutputsHeader`. Transactions parsed from bytes will have an empty `shieldedOutputs` array, causing incorrect weight calculations and validation.

Either populate `shieldedOutputs` from the header during deserialization, or derive the sum directly from the parsed header.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/models/transaction.ts` around lines 369 - 372, createFromBytes() and getHeadersFromBytes() currently ignore the parsed ShieldedOutputsHeader so this.shieldedOutputs stays empty; update those deserializers to rebuild/populate the shieldedOutputs array from the parsed ShieldedOutputsHeader (or at minimum compute and set the shielded outputs sum) so getOutputsSum() sees the real shielded values. Specifically, in getHeadersFromBytes()/createFromBytes(), locate where ShieldedOutputsHeader is parsed and either (a) iterate the header entries to create shielded output objects (with .value populated) and assign them to this.shieldedOutputs, or (b) derive and store the shielded sum from the header into a field used by getOutputsSum(); ensure the produced shielded output objects match the shape expected by getOutputsSum().
src/new/wallet.ts (1)
864-872: ⚠️ Potential issue | 🟠 Major

Return a shielded-chain path for shielded current addresses.

When `opts?.legacy === false`, this method can return a shielded address but still derives `addressPath` through `getAddressPathForIndex()`, which only builds transparent P2PKH/P2SH paths. Any caller that displays or re-derives from the returned path gets the wrong chain. `getNextAddress()` inherits the same bug.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/new/wallet.ts` around lines 864 - 872, The current getCurrentAddress (and likewise getNextAddress) always computes addressPath via getAddressPathForIndex(), which only builds transparent P2PKH/P2SH paths and thus returns incorrect paths for shielded addresses when opts?.legacy === false; update both methods to detect shielded addresses (e.g., via opts?.legacy flag or the address format returned from storage.getCurrentAddress), and when a shielded address is returned call the appropriate shielded-path builder (e.g., getShieldedAddressPathForIndex or equivalent) or otherwise compute a shielded path instead of getAddressPathForIndex; also handle null/undefined index from getAddressIndex() gracefully and keep transparent flows unchanged.
src/headers/shielded_outputs.ts (1)
67-109: ⚠️ Potential issue | 🟠 Major

Reject truncated and unsupported shielded entries during parsing.

This loop still slices fixed-size and length-prefixed fields without checking the remaining bytes, and it still accepts unknown `mode` values by falling through to the ephemeral-pubkey read. A short or invalid header can therefore build undersized `ShieldedOutput`s or misalign the buffer for the next entry instead of failing fast.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/headers/shielded_outputs.ts` around lines 67 - 109, The parsing loop for shielded outputs must reject truncated or unknown entries: before each fixed-size slice (commitment, assetCommitment, ephemeralPubkey) and before each length-prefixed read (rangeProof, script, surjectionProof) check that buf.length is >= the required bytes and that unpackToInt returned a sane length (non-negative and <= buf.length); if not, throw a parse error. Also validate the mode value read via unpackToInt against known ShieldedOutputMode values (ShieldedOutputMode.AMOUNT_SHIELDED and ShieldedOutputMode.FULLY_SHIELDED) and throw on unknown modes instead of falling through. Apply these checks around the unpackToInt calls and the subsequent Buffer.subarray uses (the loop iterating numOutputs, variables: mode, rpLen, scriptLen, spLen, commitment, rangeProof, script, assetCommitment, surjectionProof, ephemeralPubkey) so you fail fast on truncation or unsupported modes and avoid misaligning subsequent entries.
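The fail-fast checks can be factored into small helpers like the following sketch. The helper names are invented for illustration; the real deserializer works with `unpackToInt` and `Buffer.subarray`.

```typescript
// Sketch: consume a fixed-size field, throwing on truncation.
function takeFixed(buf: Buffer, size: number): [Buffer, Buffer] {
  if (buf.length < size) {
    throw new Error(`Truncated shielded output: need ${size} bytes, have ${buf.length}`);
  }
  return [buf.subarray(0, size), buf.subarray(size)];
}

// Sketch: consume a length-prefixed field after its length was decoded,
// rejecting negative or oversized lengths.
function takeVariable(buf: Buffer, len: number): [Buffer, Buffer] {
  if (len < 0 || buf.length < len) {
    throw new Error(`Invalid field length ${len} with ${buf.length} bytes remaining`);
  }
  return [buf.subarray(0, len), buf.subarray(len)];
}
```

A commitment read would then look like `[commitment, buf] = takeFixed(buf, 33)`, and an unknown `mode` value would be rejected with an explicit `throw` before any of these reads.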
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@__tests__/integration/configuration/docker-compose.yml`:
- Around line 240-241: Update the hardcoded port numbers from 3000/3001 to
3100/3101 where the test helpers construct service URLs (the variables/constants
that build the localhost URLs in the service-facade helper, e.g., the SERVICE_*
or base URL variables used on lines that create "http://localhost:3000" and
"http://localhost:3001") and change the documentation comment in the adapters
types file that mentions the old ports to reflect 3100/3101; ensure any tests or
helper utilities that derive ports from these values use the updated ports and
run the integration tests to verify connectivity.
In `@src/models/shielded_output.ts`:
- Around line 100-105: The current serializers in the ShieldedOutput
serialization code silently replace a missing or wrong-sized ephemeralPubkey
with 33 zero bytes; instead, validate that this.ephemeralPubkey exists and its
length === EPHEMERAL_PUBKEY_SIZE (using the EPHEMERAL_PUBKEY_SIZE constant) and
fail fast (throw an Error or return a serialization error) if validation fails
rather than pushing Buffer.alloc(...). Apply this change to the two
serialization sites that reference ephemeralPubkey (the shown arr.push branch
and the similar block at the 130-135 location) so malformed keys are rejected
instead of being zero-padded.
In `@src/new/sendTransaction.ts`:
- Around line 369-395: The filtered-out phantom outputs removed the only
references to custom-token amounts for shielded outputs so fullTxData.tokens no
longer contains those token ids; fix by ensuring tokens in fullTxData include
both tokenMap keys and any token ids used by shieldedOutputs (created by
createShieldedOutputs) — e.g. after shieldedOutputs is built, merge its token
ids into the token set used for fullTxData.tokens (or update tokenMap
accordingly) so the IDataTx produced in this.fullTxData contains the
custom-token identifiers referenced by shieldedOutputs and avoids melt/authority
mismatches.
In `@src/storage/storage.ts`:
- Around line 186-192: Storage currently only exposes setPinCode(pinCode:
string) and retains plaintext PIN; add a way to clear the cached PIN and ensure
teardown clears it: modify the Storage class by either (a) adding a
clearPinCode() method that nulls/undefines this.pinCode, or (b) change
setPinCode to accept null/undefined to clear the field; then call the new clear
method (or setPinCode(undefined)) from handleStop to remove the plaintext PIN.
Update any callers/tests that assume only a setter so the PIN is reliably
cleared on shutdown/rebind.
---
Duplicate comments:
In `@src/headers/shielded_outputs.ts`:
- Around line 67-109: The parsing loop for shielded outputs must reject
truncated or unknown entries: before each fixed-size slice (commitment,
assetCommitment, ephemeralPubkey) and before each length-prefixed read
(rangeProof, script, surjectionProof) check that buf.length is >= the required
bytes and that unpackToInt returned a sane length (non-negative and <=
buf.length); if not, throw a parse error. Also validate the mode value read via
unpackToInt against known ShieldedOutputMode values
(ShieldedOutputMode.AMOUNT_SHIELDED and ShieldedOutputMode.FULLY_SHIELDED) and
throw on unknown modes instead of falling through. Apply these checks around the
unpackToInt calls and the subsequent Buffer.subarray uses (the loop iterating
numOutputs, variables: mode, rpLen, scriptLen, spLen, commitment, rangeProof,
script, assetCommitment, surjectionProof, ephemeralPubkey) so you fail fast on
truncation or unsupported modes and avoid misaligning subsequent entries.
In `@src/models/transaction.ts`:
- Around line 369-372: createFromBytes() and getHeadersFromBytes() currently
ignore the parsed ShieldedOutputsHeader so this.shieldedOutputs stays empty;
update those deserializers to rebuild/populate the shieldedOutputs array from
the parsed ShieldedOutputsHeader (or at minimum compute and set the shielded
outputs sum) so getOutputsSum() sees the real shielded values. Specifically, in
getHeadersFromBytes()/createFromBytes(), locate where ShieldedOutputsHeader is
parsed and either (a) iterate the header entries to create shielded output
objects (with .value populated) and assign them to this.shieldedOutputs, or (b)
derive and store the shielded sum from the header into a field used by
getOutputsSum(); ensure the produced shielded output objects match the shape
expected by getOutputsSum().
In `@src/new/wallet.ts`:
- Around line 864-872: The current getCurrentAddress (and likewise
getNextAddress) always computes addressPath via getAddressPathForIndex(), which
only builds transparent P2PKH/P2SH paths and thus returns incorrect paths for
shielded addresses when opts?.legacy === false; update both methods to detect
shielded addresses (e.g., via opts?.legacy flag or the address format returned
from storage.getCurrentAddress), and when a shielded address is returned call
the appropriate shielded-path builder (e.g., getShieldedAddressPathForIndex or
equivalent) or otherwise compute a shielded path instead of
getAddressPathForIndex; also handle null/undefined index from getAddressIndex()
gracefully and keep transparent flows unchanged.
In `@src/shielded/provider.ts`:
- Around line 29-35: The createAmountShieldedOutput function currently only
checks for values above Number.MAX_SAFE_INTEGER but allows negative BigInt
inputs; add a validation to reject value < 0n and throw a clear Error (e.g.,
"Shielded output value must be non-negative") before calling
ct.createAmountShieldedOutput. Update the guard around value (the parameter
named value used in createAmountShieldedOutput and passed to
ct.createAmountShieldedOutput) to enforce 0n <= value <=
BigInt(Number.MAX_SAFE_INTEGER) and keep the existing error behavior for
out-of-range values.
- Around line 87-92: The helper toSafeNumber currently only checks for values
greater than Number.MAX_SAFE_INTEGER but misses negative values; update the
toSafeNumber used in the shielded code (and the duplicate helper inside
computeBalancingBlindingFactor) to also validate that v is >= 0 and throw a
clear Error (e.g., 'Shielded output value out of safe range') when v is negative
or exceeds MAX_SAFE_INTEGER so both conversion sites reject negative bigint
inputs.
In `@src/types.ts`:
- Around line 737-741: The return types for the shielded private-key accessors
are incorrect: change getScanXPrivKey(pinCode: string) and
getSpendXPrivKey(pinCode: string) to return Promise<string | undefined>
(matching getScanXPubKey/getSpendXPubKey and the optional spendMainKey in
IWalletAccessData) so callers can handle missing keys; update the method
signatures in src/types.ts accordingly (keep the parameter names the same).
- Around line 674-676: The setter signature for pinCode is too strict: the
interface has pinCode?: string but setPinCode(pinCode: string) requires a
non-optional value, preventing callers from clearing the PIN; change setPinCode
to accept an optional value (e.g., pinCode?: string or pinCode: string |
undefined) on the interface/type that declares pinCode and setPinCode
(referencing pinCode and setPinCode in src/types.ts), and then update any
implementing classes/functions and their callers to pass undefined or omit the
argument when clearing the cached PIN.
In `@src/utils/transaction.ts`:
- Around line 668-696: The current validation in the txData.shieldedOutputs
mapping only enforces common crypto fields and misses mode-specific
requirements; update the validation inside the txData.shieldedOutputs.map
callback (where ShieldedOutput is constructed) to additionally check when
so.mode === 'FULLY_SHIELDED' that so.assetCommitment and so.surjectionProof are
present (non-null/defined) and throw a clear Error if they are missing so
construction/serialization won't fail later.
In `@src/utils/wallet.ts`:
- Around line 704-719: In generateAccessDataFromSeed, avoid unconditionally
deriving shielded spend keys when walletType === WalletType.MULTISIG: wrap the
spendAcctXpriv/spendXpriv derivation and the use of spendXpubkey and
spendMainKey (and encryptData(spendXpriv.xprivkey, pin)) behind a guard that
skips derivation for multisig wallets and instead returns null/undefined (or an
appropriate empty value) for spendXpubkey and spendMainKey; reference the
existing symbols spendAcctXpriv, spendXpriv, spendXpubkey, spendMainKey,
walletType and WalletType.MULTISIG to locate and change the logic.
- Around line 623-631: The current block unconditionally derives shielded spend
keys from argXpriv (using SHIELDED_SPEND_ACCT_PATH and storing into
accessData.spendXpubkey / accessData.spendMainKey), which will produce
single-sig shielded keys even for multisig wallets; update the logic to skip
deriving/encrypting these keys when the wallet type is WalletType.MULTISIG (or
any non-single-sig type) by checking the wallet type before calling
deriveNonCompliantChild and encryptData, so multisig flows do not get private
shielded keys created for a single signer.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8882e195-138b-4dec-83a6-1d7c58d503a7
⛔ Files ignored due to path filters (1)
- `package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (44)
- __tests__/headers/shielded_outputs.test.ts
- __tests__/integration/configuration/docker-compose.yml
- __tests__/integration/configuration/privnet.yml
- __tests__/integration/hathorwallet_facade.test.ts
- __tests__/integration/shielded_transactions.test.ts
- __tests__/models/address.test.ts
- __tests__/models/network.test.ts
- __tests__/models/shielded_output.test.ts
- __tests__/shielded/processing.test.ts
- __tests__/storage/storage.test.ts
- __tests__/wallet/walletServiceStorageProxy.test.ts
- package.json
- setupTests-integration.js
- src/constants.ts
- src/headers/parser.ts
- src/headers/shielded_outputs.ts
- src/headers/types.ts
- src/lib.ts
- src/models/address.ts
- src/models/network.ts
- src/models/shielded_output.ts
- src/models/transaction.ts
- src/new/sendTransaction.ts
- src/new/types.ts
- src/new/wallet.ts
- src/schemas.ts
- src/shielded/creation.ts
- src/shielded/index.ts
- src/shielded/processing.ts
- src/shielded/provider.ts
- src/shielded/types.ts
- src/storage/memory_store.ts
- src/storage/storage.ts
- src/sync/stream.ts
- src/types.ts
- src/utils/address.ts
- src/utils/bigint.ts
- src/utils/shieldedAddress.ts
- src/utils/storage.ts
- src/utils/transaction.ts
- src/utils/wallet.ts
- src/wallet/types.ts
- src/wallet/wallet.ts
- src/wallet/walletServiceStorageProxy.ts
💤 Files with no reviewable changes (1)
- setupTests-integration.js
✅ Files skipped from review due to trivial changes (8)
- package.json
- __tests__/integration/hathorwallet_facade.test.ts
- __tests__/wallet/walletServiceStorageProxy.test.ts
- __tests__/integration/configuration/privnet.yml
- src/shielded/index.ts
- __tests__/shielded/processing.test.ts
- __tests__/models/address.test.ts
- src/wallet/wallet.ts
🚧 Files skipped from review as they are similar to previous changes (16)
- src/headers/types.ts
- src/headers/parser.ts
- src/lib.ts
- src/wallet/walletServiceStorageProxy.ts
- __tests__/models/network.test.ts
- src/utils/bigint.ts
- src/sync/stream.ts
- src/models/network.ts
- __tests__/headers/shielded_outputs.test.ts
- src/schemas.ts
- __tests__/storage/storage.test.ts
- src/wallet/types.ts
- src/new/types.ts
- src/shielded/processing.ts
- __tests__/models/shielded_output.test.ts
- src/storage/memory_store.ts
/**
 * Set the pin code for shielded output decryption during tx processing.
 * @param pinCode The pin code
 */
setPinCode(pinCode: string): void {
  this.pinCode = pinCode;
}
Provide a way to clear the cached PIN.

Storage now keeps the plaintext PIN on a long-lived object, but this API only ever sets it. Because `handleStop()` does not clear the field, a reused `Storage` instance can retain a usable PIN across wallet shutdown or rebinding.

Suggested direction

```diff
-  setPinCode(pinCode: string): void {
+  setPinCode(pinCode?: string): void {
     this.pinCode = pinCode;
   }
```

Then call `setPinCode()` during teardown to clear it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/storage/storage.ts` around lines 186 - 192, Storage currently only
exposes setPinCode(pinCode: string) and retains plaintext PIN; add a way to
clear the cached PIN and ensure teardown clears it: modify the Storage class by
either (a) adding a clearPinCode() method that nulls/undefines this.pinCode, or
(b) change setPinCode to accept null/undefined to clear the field; then call the
new clear method (or setPinCode(undefined)) from handleStop to remove the
plaintext PIN. Update any callers/tests that assume only a setter so the PIN is
reliably cleared on shutdown/rebind.
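The suggested direction above can be sketched as a tiny standalone class (a minimal sketch with hypothetical names; the real `Storage` class carries far more state):

```typescript
// Minimal sketch of a clearable PIN cache; hypothetical, not the real Storage API.
class PinCache {
  private pinCode?: string;

  // An optional argument lets the same setter clear the cached PIN.
  setPinCode(pinCode?: string): void {
    this.pinCode = pinCode;
  }

  hasPinCode(): boolean {
    return this.pinCode !== undefined;
  }

  // Teardown drops the plaintext PIN so a reused instance cannot leak it.
  handleStop(): void {
    this.setPinCode();
  }
}
```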
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/utils/storage.ts (1)
376-421: ⚠️ Potential issue | 🟡 Minor

Consider whether `shieldedLastLoadedAddressIndex` should be synchronized with `lastLoadedAddressIndex` when shielded keys are unavailable.

The `minLastLoaded` calculation correctly gates the `shieldedNeedMore` check on `hasShieldedKeys`, but when no shielded keys exist, `shieldedLastLoadedAddressIndex` remains at 0. This causes `minLastLoaded = 0` even if `lastLoadedAddressIndex` is 19+, resulting in `nextIndex = 1`. While `loadAddresses` safely deduplicates by reusing existing addresses, this forces unnecessary re-validation of already-generated addresses. If a wallet lacks shielded keys from the start, consider initializing `shieldedLastLoadedAddressIndex` to match `lastLoadedAddressIndex` to avoid redundant checks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/storage.ts` around lines 376 - 421, The checkGapLimit function can force redundant revalidation when shielded keys are absent because shieldedLastLoadedAddressIndex stays at 0; update the logic in checkGapLimit to treat shieldedLastLoadedAddressIndex (and shieldedLastUsedAddressIndex) as equal to lastLoadedAddressIndex (and lastUsedAddressIndex) when hasShieldedKeys is false: detect hasShieldedKeys from storage.getAccessData(), and if false, assign shieldedLastLoadedAddressIndex = lastLoadedAddressIndex and shieldedLastUsedAddressIndex = lastUsedAddressIndex before computing legacyNeedMore/shieldedNeedMore and minLastLoaded so minLastLoaded reflects the actual legacy progress and avoids unnecessary reloads.
♻️ Duplicate comments (6)
src/headers/shielded_outputs.ts (2)
93-106: ⚠️ Potential issue | 🟠 Major

Fail on unknown shielded output modes.

If `mode` is neither `AMOUNT_SHIELDED` nor `FULLY_SHIELDED`, this parser still consumes an ephemeral pubkey and constructs a `ShieldedOutput`. That desynchronizes the remaining buffer for the next entry instead of rejecting the header immediately.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/headers/shielded_outputs.ts` around lines 93 - 106, The parser currently only handles ShieldedOutputMode.AMOUNT_SHIELDED and FULLY_SHIELDED and silently proceeds for other values, causing it to consume ephemeralPubkey and desynchronize the buffer; update the logic around the mode switch (the code handling mode, tokenData, assetCommitment, surjectionProof, and ephemeralPubkey) to explicitly reject unknown modes by throwing an error (or returning a failure) as soon as mode is not one of ShieldedOutputMode.AMOUNT_SHIELDED or ShieldedOutputMode.FULLY_SHIELDED, ensuring no buffer consumption occurs and preventing construction of a ShieldedOutput for invalid modes.
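Both parser-hardening points raised for this file (rejecting unknown modes here, and the truncation checks in the following comment) boil down to failing fast before consuming bytes. A minimal sketch with hypothetical helper names; the real parser in src/headers/shielded_outputs.ts slices `buf` directly:

```typescript
// Hypothetical mode values; the real enum lives elsewhere and may differ.
enum ShieldedOutputMode {
  AMOUNT_SHIELDED = 1,
  FULLY_SHIELDED = 2,
}

// Reject unknown modes before reading anything else, so the buffer never desyncs.
function assertKnownMode(mode: number): ShieldedOutputMode {
  if (mode !== ShieldedOutputMode.AMOUNT_SHIELDED && mode !== ShieldedOutputMode.FULLY_SHIELDED) {
    throw new Error(`Unknown shielded output mode: ${mode}`);
  }
  return mode;
}

// Bounds-checked read: throws on truncated input instead of returning a short slice.
function readExact(buf: Buffer, offset: number, n: number): Buffer {
  if (n < 0 || offset + n > buf.length) {
    throw new Error(`Truncated header: need ${n} bytes at offset ${offset}, have ${buf.length - offset}`);
  }
  return buf.subarray(offset, offset + n);
}
```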
53-125: ⚠️ Potential issue | 🟠 Major

Reject truncated shielded headers during parsing.

All fixed-size and length-prefixed reads here slice `buf` without checking that enough bytes remain first. A truncated payload can therefore produce short commitments/proofs/pubkeys instead of failing fast, which makes malformed wire data look valid.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/headers/shielded_outputs.ts` around lines 53 - 125, The deserialize method in static deserialize (shielded_outputs.ts) reads fixed-size fields and length-prefixed blobs without verifying remaining buffer length, so truncated inputs can produce short slices instead of errors; update static deserialize to validate buf.length before every fixed-size read (33-byte commitments/ephemeralPubkey, 1/2-byte header fields) and before each variable-length read returned by unpackToInt (rpLen, scriptLen, spLen) and throw a clear Error if insufficient bytes remain; ensure you also validate numOutputs and any unpacked lengths are non-negative and within remaining buffer bounds before slicing so ShieldedOutput construction only receives complete buffers.

src/utils/transaction.ts (1)
668-697: ⚠️ Potential issue | 🟠 Major

Validate `FULLY_SHIELDED`-specific fields before building the model.

This block only checks the shared crypto fields. A `FULLY_SHIELDED` entry with missing `assetCommitment` or `surjectionProof` still becomes a `ShieldedOutput` and only blows up later when it is serialized or signed.

🛠️ Suggested fix

```diff
     const shieldedModels = txData.shieldedOutputs.map(so => {
       if (!so.commitment || !so.rangeProof || !so.script || !so.ephemeralPubkey) {
         throw new Error(
           'Shielded output missing required crypto fields (commitment, rangeProof, script, ephemeralPubkey)'
         );
       }
+      if (
+        so.mode === ShieldedOutputMode.FULLY_SHIELDED &&
+        (!so.assetCommitment || !so.surjectionProof)
+      ) {
+        throw new Error(
+          'Fully shielded output missing required crypto fields (assetCommitment, surjectionProof)'
+        );
+      }
       const tokenData = this.getTokenDataFromOutput(
         {
           type: 'p2pkh',
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/transaction.ts` around lines 668 - 697, The shielded output constructor currently only checks shared crypto fields; add a mode-specific validation so that for any txData.shieldedOutputs entry where so.mode === 'FULLY_SHIELDED' you also verify presence (and non-empty values) of so.assetCommitment and so.surjectionProof before creating a new ShieldedOutput; if either is missing throw a clear Error referencing FULLY_SHIELDED, assetCommitment, and surjectionProof to fail fast (update the validation block around txData.shieldedOutputs and ShieldedOutput construction).

src/models/shielded_output.ts (1)
101-106: ⚠️ Potential issue | 🟠 Major

Do not serialize an invalid ephemeral pubkey as zeros.

Zero-padding here turns a missing or wrong-sized ephemeral key into a different payload/sighash instead of surfacing the bug. Please fail fast when `ephemeralPubkey.length !== EPHEMERAL_PUBKEY_SIZE`.

Also applies to: 131-136
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/models/shielded_output.ts` around lines 101 - 106, The serializer currently silently zero-pads an invalid ephemeralPubkey which hides bugs; update the serialization logic around ephemeralPubkey and EPHEMERAL_PUBKEY_SIZE so that if this.ephemeralPubkey is null/undefined you still push Buffer.alloc(EPHEMERAL_PUBKEY_SIZE) for backward-compat, but if this.ephemeralPubkey is present and this.ephemeralPubkey.length !== EPHEMERAL_PUBKEY_SIZE throw a descriptive Error (or assert) instead of zero-padding; make the same change in the second occurrence handling the ephemeralPubkey (lines 131-136 equivalent) so both code paths fail fast on wrong-sized keys.

src/new/wallet.ts (2)
1526-1551: ⚠️ Potential issue | 🔴 Critical

Do not compact `newTx.outputs`; it breaks output indexes.

Replacing `newTx.outputs` with only transparent entries renumbers every output after the first shielded one. The rest of the wallet still uses array position as the on-chain output index, so spends and UTXO bookkeeping will point at the wrong output on shielded transactions.

🛠️ Suggested fix

```diff
     if (!newTx.shielded_outputs) {
       const shieldedEntries: IHistoryShieldedOutput[] = [];
-      const transparentOutputs: IHistoryOutput[] = [];
       for (const output of newTx.outputs) {
         if (transactionUtils.isShieldedOutputEntry(output)) {
           shieldedEntries.push({
             mode: output.asset_commitment ? 2 : 1, // FullShielded=2, AmountShielded=1
             commitment: output.commitment!, // already hex
@@
             surjection_proof: output.surjection_proof
               ? Buffer.from(output.surjection_proof, 'base64').toString('hex')
               : undefined,
           });
-        } else {
-          transparentOutputs.push(output);
         }
       }
       if (shieldedEntries.length > 0) {
         newTx.shielded_outputs = shieldedEntries;
-        newTx.outputs = transparentOutputs;
       }
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/new/wallet.ts` around lines 1526 - 1551, The current code compacts newTx.outputs removing shielded entries which shifts on-chain output indexes; instead, build shielded_entries by iterating newTx.outputs via transactionUtils.isShieldedOutputEntry (as is done) and assign newTx.shielded_outputs = shieldedEntries but do NOT reassign or filter newTx.outputs (leave the original outputs array intact so array indices remain stable); if you need to mark shielded items for later logic, add a non-index-affecting marker (e.g., a boolean property) to the output objects rather than removing them.
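The index-stability point can be illustrated with a small sketch (hypothetical shapes; the real types live in src/types.ts): shielded entries are collected by position without filtering the outputs array, so the array index keeps matching the on-chain output index.

```typescript
// Hypothetical minimal output shape for illustration.
type HistoryOutput = { type?: string; commitment?: string };

// Collect shielded entries WITHOUT compacting `outputs`, so positions stay stable.
function collectShielded(outputs: HistoryOutput[]): { index: number; commitment: string }[] {
  const shielded: { index: number; commitment: string }[] = [];
  outputs.forEach((output, index) => {
    if (output.type === 'shielded' && output.commitment) {
      shielded.push({ index, commitment: output.commitment });
    }
  });
  return shielded;
}
```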
875-899: ⚠️ Potential issue | 🟠 Major

Return a chain-correct `addressPath` for shielded addresses.

When `opts?.legacy === false`, these methods can return a shielded address, but `addressPath` still comes from `getAddressPathForIndex()`, which always builds the legacy P2PKH/P2SH path. Any caller that uses the returned path to display or derive the address will get the wrong chain.
🧹 Nitpick comments (1)
src/shielded/processing.ts (1)
77-104: Consider adding debug logging when outputs are silently skipped.

Outputs are silently skipped when:
- `address` is missing (line 91)
- `addressInfo` is null (line 95)
- `privkey` derivation fails (line 104)

While the last case logs at warn level, the first two cases might benefit from debug-level logging to aid troubleshooting when shielded outputs unexpectedly don't appear in wallet balances.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/shielded/processing.ts` around lines 77 - 104, In processShieldedOutputs, add debug-level logging where outputs are currently silently skipped: when address is falsy after shieldedOutput.decoded?.address (log shielded output index idx and any decoded info), and when storage.getAddressInfo(address) returns null (log address and idx); keep the existing warn for failed privkey derivation (privkey) but ensure its message includes address, idx and tx identifier for context. Use the storage.logger (or existing logger) and the same surrounding variables (shieldedOutput, idx, address, addressInfo, privkey, transparentCount) so skipped cases are traceable without changing control flow.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@__tests__/shielded/creation.test.ts`:
- Around line 1-145: The test file containing the
describe("createShieldedOutputs") suite (which defines makeMockProvider,
makeDef, TEST_ADDRESS, and imports createShieldedOutputs) is not
Prettier-formatted; run the project's Prettier formatter (e.g. npm/yarn script
or npx prettier --write) on that test file, commit the resulting formatting
changes, and ensure the linter job passes before merging.
In `@src/shielded/processing.ts`:
- Around line 118-147: The code uses a non-null assertion on
shieldedOutput.asset_commitment when isFullShielded is true which can crash on
malformed responses; before calling
Buffer.from(shieldedOutput.asset_commitment!, 'hex') (and before invoking
cryptoProvider.rewindFullShieldedOutput), explicitly validate that
shieldedOutput.asset_commitment exists and is a non-empty string, and if missing
log a clear warning via storage.logger.warn (or throw a descriptive Error) and
skip processing that output (continue) so subsequent calls to
rewindFullShieldedOutput, deriveTag, and createAssetCommitment are never
executed with undefined data.
In `@src/wallet/types.ts`:
- Around line 363-371: Update the IHathorWallet type so getNextAddress mirrors
the implementation by adding the optional opts parameter: change the
getNextAddress signature to getNextAddress(opts?: IAddressChainOptions):
AddressInfoObject | Promise<AddressInfoObject>; so callers can pass
shielded-chain progression options (refer to IAddressChainOptions and
getNextAddress in src/new/wallet.ts for parity).
---
Outside diff comments:
In `@src/utils/storage.ts`:
- Around line 376-421: The checkGapLimit function can force redundant
revalidation when shielded keys are absent because
shieldedLastLoadedAddressIndex stays at 0; update the logic in checkGapLimit to
treat shieldedLastLoadedAddressIndex (and shieldedLastUsedAddressIndex) as equal
to lastLoadedAddressIndex (and lastUsedAddressIndex) when hasShieldedKeys is
false: detect hasShieldedKeys from storage.getAccessData(), and if false, assign
shieldedLastLoadedAddressIndex = lastLoadedAddressIndex and
shieldedLastUsedAddressIndex = lastUsedAddressIndex before computing
legacyNeedMore/shieldedNeedMore and minLastLoaded so minLastLoaded reflects the
actual legacy progress and avoids unnecessary reloads.
---
Duplicate comments:
In `@src/headers/shielded_outputs.ts`:
- Around line 93-106: The parser currently only handles
ShieldedOutputMode.AMOUNT_SHIELDED and FULLY_SHIELDED and silently proceeds for
other values, causing it to consume ephemeralPubkey and desynchronize the
buffer; update the logic around the mode switch (the code handling mode,
tokenData, assetCommitment, surjectionProof, and ephemeralPubkey) to explicitly
reject unknown modes by throwing an error (or returning a failure) as soon as
mode is not one of ShieldedOutputMode.AMOUNT_SHIELDED or
ShieldedOutputMode.FULLY_SHIELDED, ensuring no buffer consumption occurs and
preventing construction of a ShieldedOutput for invalid modes.
- Around line 53-125: The deserialize method in static deserialize
(shielded_outputs.ts) reads fixed-size fields and length-prefixed blobs without
verifying remaining buffer length, so truncated inputs can produce short slices
instead of errors; update static deserialize to validate buf.length before every
fixed-size read (33-byte commitments/ephemeralPubkey, 1/2-byte header fields)
and before each variable-length read returned by unpackToInt (rpLen, scriptLen,
spLen) and throw a clear Error if insufficient bytes remain; ensure you also
validate numOutputs and any unpacked lengths are non-negative and within
remaining buffer bounds before slicing so ShieldedOutput construction only
receives complete buffers.
In `@src/models/shielded_output.ts`:
- Around line 101-106: The serializer currently silently zero-pads an invalid
ephemeralPubkey which hides bugs; update the serialization logic around
ephemeralPubkey and EPHEMERAL_PUBKEY_SIZE so that if this.ephemeralPubkey is
null/undefined you still push Buffer.alloc(EPHEMERAL_PUBKEY_SIZE) for
backward-compat, but if this.ephemeralPubkey is present and
this.ephemeralPubkey.length !== EPHEMERAL_PUBKEY_SIZE throw a descriptive Error
(or assert) instead of zero-padding; make the same change in the second
occurrence handling the ephemeralPubkey (lines 131-136 equivalent) so both code
paths fail fast on wrong-sized keys.
In `@src/new/wallet.ts`:
- Around line 1526-1551: The current code compacts newTx.outputs removing
shielded entries which shifts on-chain output indexes; instead, build
shielded_entries by iterating newTx.outputs via
transactionUtils.isShieldedOutputEntry (as is done) and assign
newTx.shielded_outputs = shieldedEntries but do NOT reassign or filter
newTx.outputs (leave the original outputs array intact so array indices remain
stable); if you need to mark shielded items for later logic, add a
non-index-affecting marker (e.g., a boolean property) to the output objects
rather than removing them.
In `@src/utils/transaction.ts`:
- Around line 668-697: The shielded output constructor currently only checks
shared crypto fields; add a mode-specific validation so that for any
txData.shieldedOutputs entry where so.mode === 'FULLY_SHIELDED' you also verify
presence (and non-empty values) of so.assetCommitment and so.surjectionProof
before creating a new ShieldedOutput; if either is missing throw a clear Error
referencing FULLY_SHIELDED, assetCommitment, and surjectionProof to fail fast
(update the validation block around txData.shieldedOutputs and ShieldedOutput
construction).
---
Nitpick comments:
In `@src/shielded/processing.ts`:
- Around line 77-104: In processShieldedOutputs, add debug-level logging where
outputs are currently silently skipped: when address is falsy after
shieldedOutput.decoded?.address (log shielded output index idx and any decoded
info), and when storage.getAddressInfo(address) returns null (log address and
idx); keep the existing warn for failed privkey derivation (privkey) but ensure
its message includes address, idx and tx identifier for context. Use the
storage.logger (or existing logger) and the same surrounding variables
(shieldedOutput, idx, address, addressInfo, privkey, transparentCount) so
skipped cases are traceable without changing control flow.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 55e764f0-8be6-4684-b0e9-8c02093c1f2b
📒 Files selected for processing (27)
- __tests__/headers/shielded_outputs.test.ts
- __tests__/integration/shielded_transactions.test.ts
- __tests__/models/network.test.ts
- __tests__/models/shielded_output.test.ts
- __tests__/models/transaction.test.ts
- __tests__/shielded/creation.test.ts
- __tests__/shielded/processing.test.ts
- src/headers/shielded_outputs.ts
- src/models/address.ts
- src/models/network.ts
- src/models/shielded_output.ts
- src/new/sendTransaction.ts
- src/new/wallet.ts
- src/shielded/creation.ts
- src/shielded/processing.ts
- src/shielded/provider.ts
- src/shielded/types.ts
- src/storage/memory_store.ts
- src/storage/storage.ts
- src/sync/stream.ts
- src/types.ts
- src/utils/address.ts
- src/utils/bigint.ts
- src/utils/shieldedAddress.ts
- src/utils/storage.ts
- src/utils/transaction.ts
- src/wallet/types.ts
✅ Files skipped from review due to trivial changes (2)
- __tests__/models/network.test.ts
- __tests__/models/shielded_output.test.ts
🚧 Files skipped from review as they are similar to previous changes (10)
- src/utils/bigint.ts
- src/sync/stream.ts
- src/models/network.ts
- __tests__/headers/shielded_outputs.test.ts
- src/models/address.ts
- src/utils/shieldedAddress.ts
- __tests__/shielded/processing.test.ts
- src/types.ts
- src/new/sendTransaction.ts
- src/shielded/types.ts
Part 1: Decoded shielded outputs are now merged into tx.outputs after decryption, allowing the existing transparent output loop to handle UTXO creation, balance updates, and metadata uniformly. This removes the separate shielded UTXO/metadata block and fixes a bug where processMetadataChanged skipped shielded outputs (spent_by updates were never processed for shielded UTXOs).

Part 2: IHistoryOutput is now a discriminated union of ITransparentOutput and IShieldedOutputEntry. Both share value/token/decoded/spent_by fields. The discriminant is type: 'shielded'. isShieldedOutputEntry() is now a proper type guard. Removed all isShieldedOutputEntry skip patterns and unsafe (output as any).type casts across the codebase.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The surjection proof was created with only the output's token generator in the domain, but the fullnode verifies against ALL input generators. Fixed createShieldedOutputs to accept inputTokenUids and build the full domain from all transparent inputs. Also unskipped custom token integration tests now that hathor-core includes the to_json_extended tokens fix.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/utils/storage.ts (1)
342-363: ⚠️ Potential issue | 🟠 Major

Ignore the shielded cursor in index-limit mode when the wallet has no shielded keys.

On legacy-only wallets, `shieldedLastLoadedAddressIndex` stays at its default, so `Math.min(...)` pulls `nextIndex` back to `1` and re-fetches the prefix on every extension. This calculation needs the same key-availability gate used in the gap-limit path.

Suggested fix

```diff
-    const lastLoaded = Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex);
+    const hasShieldedKeys = !!(await storage.getAccessData())?.spendXpubkey;
+    const lastLoaded = hasShieldedKeys
+      ? Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex)
+      : lastLoadedAddressIndex;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/storage.ts` around lines 342 - 363, The Math.min-based calculation for nextIndex incorrectly uses shieldedLastLoadedAddressIndex even when the wallet has no shielded keys, causing nextIndex to jump back; update the logic in the index-limit branch (after storage.getWalletData and isIndexLimitScanPolicy) to only consider shieldedLastLoadedAddressIndex when the wallet actually has shielded keys (i.e., use the same key-availability gate used in the gap-limit path), otherwise use lastLoadedAddressIndex alone before comparing to limits.endIndex so nextIndex/count are computed from the correct cursor.
♻️ Duplicate comments (3)
src/shielded/processing.ts (1)
117-125: ⚠️ Potential issue | 🟠 Major

Validate `asset_commitment` before rewinding FullShielded outputs.

`IHistoryShieldedOutput.asset_commitment` is optional, so this still treats malformed fullnode data as a generic rewind failure and silently drops the output. Please warn and `continue` explicitly before calling `rewindFullShieldedOutput()`.

Suggested fix

```diff
       if (isFullShielded) {
         // FullShielded: rewind recovers token UID and asset blinding factor
-        const assetCommitment = Buffer.from(shieldedOutput.asset_commitment!, 'hex');
+        if (!shieldedOutput.asset_commitment) {
+          storage.logger.warn(
+            'FullShielded output missing asset_commitment',
+            tx.tx_id,
+            transparentCount + idx
+          );
+          continue;
+        }
+        const assetCommitment = Buffer.from(shieldedOutput.asset_commitment, 'hex');
         const result = await cryptoProvider.rewindFullShieldedOutput(
           privkey,
           ephPk,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/shielded/processing.ts` around lines 117 - 125, The code calls cryptoProvider.rewindFullShieldedOutput for FullShielded outputs without validating shieldedOutput.asset_commitment; check that shieldedOutput.asset_commitment exists and is a valid hex string before doing Buffer.from(...) and before calling rewindFullShieldedOutput(privkey, ephPk, commitment, rangeProof, assetCommitment); if it's missing or malformed, emit a warning (e.g., using your logger) that includes identifying context and continue to the next output instead of attempting the rewind.

src/storage/storage.ts (1)
186-192: ⚠️ Potential issue | 🟠 Major

Allow the cached PIN to be cleared.

This setter only ever writes a plaintext PIN, so a reused `Storage` instance can retain a usable PIN after shutdown. Please make the argument optional and clear it from `handleStop()`.

Suggested direction

```diff
-  setPinCode(pinCode: string): void {
+  setPinCode(pinCode?: string): void {
     this.pinCode = pinCode;
   }
```

Then call `this.setPinCode()` during teardown.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/storage/storage.ts` around lines 186 - 192, Change setPinCode(pinCode: string) to accept an optional value (e.g., setPinCode(pinCode?: string)) and have it clear the stored PIN when called with undefined (set this.pinCode = pinCode). Update the Storage instance state so the pinCode field can be undefined/nullable. In handleStop(), call this.setPinCode() with no argument to clear the cached PIN during teardown. Ensure references that read this.pinCode handle the nullable/undefined case appropriately.

src/utils/storage.ts (1)
395-420: ⚠️ Potential issue | 🟠 Major

Gap-limit expansion still regresses to the prefix when shielded is disabled.

`shieldedNeedMore` is gated correctly, but `shieldedTarget` and `minLastLoaded` still consume the inactive shielded cursor. On wallets without shielded keys this makes suffix loading restart from the prefix instead of only fetching new addresses.

Suggested fix

```diff
   const legacyTarget = legacyNeedMore ? lastUsedAddressIndex + gapLimit : lastLoadedAddressIndex;
-  const shieldedTarget = shieldedNeedMore
-    ? shieldedLastUsedAddressIndex + gapLimit
-    : shieldedLastLoadedAddressIndex;
+  const shieldedTarget = hasShieldedKeys
+    ? (shieldedNeedMore
+        ? shieldedLastUsedAddressIndex + gapLimit
+        : shieldedLastLoadedAddressIndex)
+    : legacyTarget;
   const maxTarget = Math.max(legacyTarget, shieldedTarget);
-  const minLastLoaded = Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex);
+  const minLastLoaded = hasShieldedKeys
+    ? Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex)
+    : lastLoadedAddressIndex;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/storage.ts` around lines 395 - 420, The code uses shieldedLastLoadedAddressIndex even when shielded keys are absent, causing minLastLoaded and shieldedTarget to pull the inactive shielded cursor and reset suffix loading; update the logic to ignore shielded values when hasShieldedKeys is false: compute shieldedTarget using hasShieldedKeys (if false, set shieldedTarget to lastLoadedAddressIndex) and compute minLastLoaded as hasShieldedKeys ? Math.min(lastLoadedAddressIndex, shieldedLastLoadedAddressIndex) : lastLoadedAddressIndex, so shieldedLastLoadedAddressIndex is only considered when hasShieldedKeys is true (affecting shieldedNeedMore, shieldedTarget, and minLastLoaded).
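The key-availability gate described above can be condensed into one pure function (a sketch under the names used in this discussion, not the exact source of src/utils/storage.ts):

```typescript
// Hypothetical sketch of the gap-limit target calculation with a shielded-keys gate.
function computeTargets(opts: {
  hasShieldedKeys: boolean;
  gapLimit: number;
  lastUsed: number;
  lastLoaded: number;
  shieldedLastUsed: number;
  shieldedLastLoaded: number;
}): { maxTarget: number; minLastLoaded: number } {
  const legacyNeedMore = opts.lastLoaded - opts.lastUsed < opts.gapLimit;
  const legacyTarget = legacyNeedMore ? opts.lastUsed + opts.gapLimit : opts.lastLoaded;

  // Without shielded keys the shielded cursor is inert; fall back to legacy.
  const shieldedNeedMore =
    opts.hasShieldedKeys && opts.shieldedLastLoaded - opts.shieldedLastUsed < opts.gapLimit;
  const shieldedTarget = opts.hasShieldedKeys
    ? shieldedNeedMore
      ? opts.shieldedLastUsed + opts.gapLimit
      : opts.shieldedLastLoaded
    : legacyTarget;

  const maxTarget = Math.max(legacyTarget, shieldedTarget);
  const minLastLoaded = opts.hasShieldedKeys
    ? Math.min(opts.lastLoaded, opts.shieldedLastLoaded)
    : opts.lastLoaded;
  return { maxTarget, minLastLoaded };
}
```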
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/utils/storage.ts`:
- Around line 764-803: Make the shielded-output merge idempotent and
index-stable by capturing the original tx.outputs length before appending and by
skipping already-merged shielded entries: compute const originalOutputsLen =
tx.outputs.length before the for-loop and use const so =
tx.shielded_outputs[result.index - originalOutputsLen] so index math doesn't
shift as you push; before pushing, check whether an equivalent decoded shielded
output already exists (e.g. tx.outputs.some(o => o.type === 'shielded' &&
o.commitment === so?.commitment && o.decoded?.address === result.address &&
o.token === (result.tokenUid === NATIVE_TOKEN_UID_HEX ? NATIVE_TOKEN_UID :
result.tokenUid))) and only push if not found; finally call await
store.saveTx(tx) only when you actually appended any outputs.
---
Outside diff comments:
In `@src/utils/storage.ts`:
- Around line 342-363: The Math.min-based calculation for nextIndex incorrectly
uses shieldedLastLoadedAddressIndex even when the wallet has no shielded keys,
causing nextIndex to jump back; update the logic in the index-limit branch
(after storage.getWalletData and isIndexLimitScanPolicy) to only consider
shieldedLastLoadedAddressIndex when the wallet actually has shielded keys (i.e.,
use the same key-availability gate used in the gap-limit path), otherwise use
lastLoadedAddressIndex alone before comparing to limits.endIndex so
nextIndex/count are computed from the correct cursor.
---
Duplicate comments:
In `@src/shielded/processing.ts`:
- Around line 117-125: The code calls cryptoProvider.rewindFullShieldedOutput
for FullShielded outputs without validating shieldedOutput.asset_commitment;
check that shieldedOutput.asset_commitment exists and is a valid hex string
before doing Buffer.from(...) and before calling
rewindFullShieldedOutput(privkey, ephPk, commitment, rangeProof,
assetCommitment); if it's missing or malformed, emit a warning (e.g., using your
logger) that includes identifying context and continue to the next output
instead of attempting the rewind.
In `@src/storage/storage.ts`:
- Around line 186-192: Change setPinCode(pinCode: string) to accept an optional
value (e.g., setPinCode(pinCode?: string)) and have it clear the stored PIN when
called with undefined (set this.pinCode = pinCode). Update the Storage instance
state so the pinCode field can be undefined/nullable. In handleStop(), call
this.setPinCode() with no argument to clear the cached PIN during teardown.
Ensure references that read this.pinCode handle the nullable/undefined case
appropriately.
In `@src/utils/storage.ts`:
- Around line 395-420: The code uses shieldedLastLoadedAddressIndex even when
shielded keys are absent, causing minLastLoaded and shieldedTarget to pull the
inactive shielded cursor and reset suffix loading; update the logic to ignore
shielded values when hasShieldedKeys is false: compute shieldedTarget using
hasShieldedKeys (if false, set shieldedTarget to lastLoadedAddressIndex) and
compute minLastLoaded as hasShieldedKeys ? Math.min(lastLoadedAddressIndex,
shieldedLastLoadedAddressIndex) : lastLoadedAddressIndex, so
shieldedLastLoadedAddressIndex is only considered when hasShieldedKeys is true
(affecting shieldedNeedMore, shieldedTarget, and minLastLoaded).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 073ac695-dcbd-4dd8-9c2b-2f3679554527
📒 Files selected for processing (12)
- __tests__/integration/shielded_transactions.test.ts
- src/constants.ts
- src/new/sendTransaction.ts
- src/new/wallet.ts
- src/shielded/creation.ts
- src/shielded/processing.ts
- src/storage/memory_store.ts
- src/storage/storage.ts
- src/types.ts
- src/utils/storage.ts
- src/utils/transaction.ts
- src/utils/wallet.ts
🚧 Files skipped from review as they are similar to previous changes (5)
- src/utils/transaction.ts
- src/utils/wallet.ts
- src/constants.ts
- src/new/wallet.ts
- src/new/sendTransaction.ts
- Add shielded filter option to IUtxoFilterOptions and selectUtxos
- Add getUtxo lookup method to IStore and MemoryStore
- Extract shielded input blinding factors in prepareTxData and pass
to createShieldedOutputs for the homomorphic balance equation
- Mark UTXOs from decoded shielded outputs with shielded: true flag
- Make IHistoryInput fields optional for shielded inputs (spent output
value/token hidden in commitments)
- Handle missing input.decoded safely with optional chaining
- Fix test helper updateInputsSpentBy to skip shielded inputs
- Add unshielding integration test (shielded -> transparent)
- Replace all approximate assertions with exact expected values
Blocked on hathor-core: HTR surplus check in _check_token_permissions_and_deposits
must skip for shielded transactions (patched in Docker for testing).
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Throw on unknown shielded output mode in deserializer
- Add buffer length validation in shielded output deserializer
- Validate asset_commitment before non-null assertion for FullShielded
- Throw on invalid ephemeral pubkey instead of zero-filling
- Expose opts on getNextAddress interface
- Make setPinCode parameter optional for clearing
- Change getScanXPrivKey/getSpendXPrivKey return type to string | undefined
- Return correct addressPath for shielded chain derivation
- Run Prettier on creation.test.ts

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Replace hardcoded HTR_UID with imported NATIVE_TOKEN_UID in creation.ts
- Add upfront input validation in createShieldedOutputs: pubkey size,
token UID size, FullShielded domain requirement
- Wrap per-output crypto errors with context (index, mode, token)
- Change outer shielded processing catch from warn to error level
- Add tx_id and output index to FullShielded cross-check failure log
- Enforce MAX_SHIELDED_OUTPUTS in prepareTxData before crypto work
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- resolveTokenUid throws on out-of-bounds token_data index instead of
silently falling back to HTR (could misattribute custom token funds)
- Move minimum 2 shielded outputs check before createShieldedOutputs
to avoid wasted crypto work on invalid input
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…lded spending
Blinding factors recovered during shielded output decryption were being
discarded — not saved to the decoded output entry or the UTXO record.
This meant computeBalancingBlindingFactor received empty inputs when
spending shielded UTXOs to create new shielded outputs, breaking the
homomorphic balance equation.
- Add blindingFactor and assetBlindingFactor fields to IShieldedOutputEntry
- Populate them when appending decoded shielded outputs to tx.outputs
- Pass them through to UTXO via saveUtxo
- Add integration test that verifies blinding factors are persisted and
used correctly for shielded-to-shielded chain spending
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
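The homomorphic balance described in these commits can be sketched in isolation. This is an illustrative model only, not the wallet's actual `computeBalancingBlindingFactor`: Pedersen commitments `C = v·H + r·G` add up, so a transaction balances when the blinding-factor sums on the input and output sides agree modulo the curve order, and the last output's factor is chosen to close the equation (secp256k1's order is used here as an assumed example).

```typescript
// Sketch of the balance behind recovering and reusing blinding factors.
// CURVE_ORDER is secp256k1's group order; the function name is illustrative
// and does not match the library's actual API.
const CURVE_ORDER = BigInt(
  '0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141'
);

function mod(a: bigint, n: bigint): bigint {
  // Always-positive modular reduction.
  return ((a % n) + n) % n;
}

// r_last = sum(input blindings) - sum(other output blindings)  (mod order),
// so that sum(inputs) == sum(outputs) and the commitments cancel.
function balancingBlindingFactor(
  inputBlindings: bigint[],
  outputBlindings: bigint[]
): bigint {
  const sumIn = inputBlindings.reduce((acc, r) => mod(acc + r, CURVE_ORDER), 0n);
  const sumOut = outputBlindings.reduce((acc, r) => mod(acc + r, CURVE_ORDER), 0n);
  return mod(sumIn - sumOut, CURVE_ORDER);
}
```

This is why discarding recovered blinding factors breaks shielded-to-shielded spends: without the input `r` values, the balancing factor for the new outputs cannot be computed.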
- Forward timelock for shielded outputs in sendManyOutputsSendTransaction
- Preserve decoded metadata (timelock) when merging shielded outputs
- Add shielded_outputs schema validation to IHistoryTxSchema
- Add import/no-unresolved eslint-disable for optional ct-crypto-node
- Guard shielded address derivation in stream processQueue with try/catch
- Update JSDoc derivation paths to 1'/0 (scan) and 2'/0 (spend)
- Fix FullShielded cross-check test to use valid xpriv and exercise
the actual rewind + cross-check code path
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@coderabbitai once again, all your comments were addressed in the latest commit. Resolve ALL previous comments you made and do a full review
✅ Actions performed: Full review triggered.
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
src/models/transaction.ts (1)
80-120: ⚠️ Potential issue | 🟠 Major — Keep shielded outputs in one source of truth.
`toBytes()`, `getDataToSign()`, and hashing serialize `this.headers`, but `validate()` and `calculateWeight()` now read `this.shieldedOutputs`. With the new public field, a freshly built tx can count/value shielded outputs that never get serialized or signed if only `tx.shieldedOutputs` is updated, or the reverse if only the header is updated. Please derive shielded outputs from the `ShieldedOutputsHeader` or funnel all mutations through a single helper that updates both together.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/models/transaction.ts` around lines 80 - 120, The code currently has two sources of truth for shielded outputs (the public field shieldedOutputs and the serialized headers entry ShieldedOutputsHeader), causing inconsistencies between serialization (toBytes, getDataToSign, hashing) and runtime checks (validate, calculateWeight); change the implementation so there is a single source of truth: either (A) remove direct mutations of shieldedOutputs and always derive its value on access from the ShieldedOutputsHeader stored in headers (add a getter like getShieldedOutputs() used by validate, calculateWeight, signing), or (B) create a single helper (e.g., setShieldedOutputs(headerOrList)) that updates both this.headers (the ShieldedOutputsHeader entry) and this.shieldedOutputs atomically and replace all direct assignments with that helper; update toBytes, getDataToSign, validate, and calculateWeight to use the new getter/helper so serialization and runtime logic remain consistent.src/wallet/types.ts (1)
723-731: ⚠️ Potential issue | 🟠 Major — `FullNodeOutput` should be a discriminated union type, not a single interface with an optional `type` field.
The `outputs` array in `FullNodeTx` contains both transparent outputs and `IShieldedOutputEntry` objects (with a `type='shielded'` discriminant), as evidenced by the `isShieldedOutputEntry()` check in walletServiceStorageProxy.ts:193. Adding `type?: string` alone doesn't prevent callers from assuming all transparent-only fields (`value`, `token_data`, `script`, `decoded`) are always present.
Change `FullNodeTx.outputs` to a union type (e.g., `FullNodeOutput = FullNodeTransparentOutput | IShieldedOutputEntry`, or declare `outputs: (FullNodeTransparentOutput | IShieldedOutputEntry)[]`) to properly express the polymorphic structure.
🤖 Prompt for AI Agents
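The union shape the comment asks for can be sketched like this. The field sets follow the review comment, not necessarily the real interfaces in src/wallet/types.ts:

```typescript
// Hedged sketch: a discriminated union so shielded entries cannot be
// mistaken for transparent outputs at compile time.
interface FullNodeTransparentOutput {
  type?: 'transparent'; // absent on legacy payloads
  value: number;
  token_data: number;
  script: string;
  decoded: { address?: string };
}

interface IShieldedOutputEntry {
  type: 'shielded'; // discriminant
  commitment: string;
  ephemeral_pubkey: string;
}

type FullNodeOutput = FullNodeTransparentOutput | IShieldedOutputEntry;

// After this guard, the compiler narrows to IShieldedOutputEntry, so
// transparent-only fields can no longer be accessed by accident.
function isShieldedOutputEntry(o: FullNodeOutput): o is IShieldedOutputEntry {
  return o.type === 'shielded';
}
```

The payoff is that `o.value` fails to type-check on the union until the guard (or its negation) has run.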
Verify each finding against the current code and only fix it if needed. In `@src/wallet/types.ts` around lines 723 - 731, Replace the single FullNodeOutput interface with a discriminated union so consumers cannot assume transparent-only fields exist on shielded entries: define FullNodeTransparentOutput (with value, token_data, script, decoded, optional token/spent_by and type='transparent' or undefined) and use the existing IShieldedOutputEntry as the other variant, then change FullNodeTx.outputs to be (FullNodeTransparentOutput | IShieldedOutputEntry)[]; update any type guards (e.g., isShieldedOutputEntry) to narrow to IShieldedOutputEntry so code accessing value/token_data/script/decoded only does so after the guard.src/schemas.ts (1)
174-205: ⚠️ Potential issue | 🟠 Major — Make `decoded` optional on `shielded_outputs`.
Downstream code already treats `decoded?.address` as optional and skips entries it can't resolve. Requiring `decoded` here means a payload without that helper block fails at `IHistoryTxSchema.parse()` and aborts sync before the skip logic runs.
🔧 Minimal schema fix
 const IHistoryShieldedOutputSchema = z
   .object({
     mode: z.number(),
     commitment: z.string(),
     range_proof: z.string(),
     script: z.string(),
     token_data: z.number(),
     ephemeral_pubkey: z.string(),
-    decoded: IHistoryOutputDecodedSchema,
+    decoded: IHistoryOutputDecodedSchema.optional(),
     asset_commitment: z.string().optional(),
     surjection_proof: z.string().optional(),
   })
   .passthrough();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/schemas.ts` around lines 174 - 205, The schema requires the `decoded` field in `IHistoryShieldedOutputSchema`, causing `IHistoryTxSchema.parse()` to fail when a shielded output lacks that helper; make `decoded` optional by changing the `decoded: IHistoryOutputDecodedSchema` entry in `IHistoryShieldedOutputSchema` to an optional variant (e.g., use `.optional()` on `IHistoryOutputDecodedSchema`) so `shielded_outputs` can be parsed when `decoded` is absent while leaving the rest of the schema (including passthrough) intact.src/models/address.ts (1)
98-127: ⚠️ Potential issue | 🟠 Major — Tie payload length to the version byte.
`validateAddress()` currently accepts any 25/71-byte payload as long as the checksum is valid and the first byte is one of the network bytes. That means a 71-byte payload with a transparent version byte, or a 25-byte payload with the shielded byte, is treated as valid and then misclassified by `getType()`/`getScript()`.
🛡️ Suggested guard
 // Validate version byte
 const firstByte = addressBytes[0];
-if (!this.network.isVersionByteValid(firstByte)) {
+const isLegacyLength = addressBytes.length === LEGACY_ADDR_LENGTH;
+const isShieldedLength = addressBytes.length === SHIELDED_ADDR_LENGTH;
+const isLegacyVersion =
+  firstByte === this.network.versionBytes.p2pkh ||
+  firstByte === this.network.versionBytes.p2sh;
+const isShieldedVersion = firstByte === this.network.versionBytes.shielded;
+
+if (
+  !this.network.isVersionByteValid(firstByte) ||
+  (isLegacyLength && !isLegacyVersion) ||
+  (isShieldedLength && !isShieldedVersion)
+) {
   throw new AddressError(
-    `${errorMessage} Invalid network byte. Expected: ${this.network.versionBytes.p2pkh}, ${this.network.versionBytes.p2sh}, or ${this.network.versionBytes.shielded} and received ${firstByte}.`
+    `${errorMessage} Invalid version byte ${firstByte} for a ${addressBytes.length}-byte address.`
   );
 }
🤖 Prompt for AI Agents
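The length/version consistency rule can be checked in isolation. A minimal sketch, assuming the 25-byte legacy and 71-byte shielded lengths from the comment; the version-byte values below are invented for illustration and are not Hathor's actual network constants:

```typescript
// Hedged sketch: a payload length is only valid for the address family
// its version byte claims.
const LEGACY_ADDR_LENGTH = 25;
const SHIELDED_ADDR_LENGTH = 71;

interface VersionBytes {
  p2pkh: number;
  p2sh: number;
  shielded: number;
}

function checkVersionLength(
  firstByte: number,
  payloadLength: number,
  versionBytes: VersionBytes
): boolean {
  if (firstByte === versionBytes.shielded) {
    return payloadLength === SHIELDED_ADDR_LENGTH;
  }
  if (firstByte === versionBytes.p2pkh || firstByte === versionBytes.p2sh) {
    return payloadLength === LEGACY_ADDR_LENGTH;
  }
  return false; // unknown version byte
}
```

A 71-byte payload with a transparent version byte then fails validation instead of being misclassified later.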
Verify each finding against the current code and only fix it if needed. In `@src/models/address.ts` around lines 98 - 127, In validateAddress (in src/models/address.ts) ensure the payload length is consistent with the version byte: after computing firstByte (version) and before returning/continuing, enforce that if firstByte === this.network.versionBytes.shielded then addressBytes.length must equal SHIELDED_ADDR_LENGTH, otherwise addressBytes.length must equal LEGACY_ADDR_LENGTH (or explicitly allow p2pkh/p2sh to only match LEGACY_ADDR_LENGTH); throw an AddressError with a clear message if the mismatch occurs — this ties payload length to the version byte and prevents misclassification by getType()/getScript().
♻️ Duplicate comments (9)
src/utils/transaction.ts (1)
667-695: ⚠️ Potential issue | 🟠 Major — Validate FULLY_SHIELDED-only fields before instantiating the model.
The current guard only checks the shared fields. A `FULLY_SHIELDED` output can still be built with `assetCommitment` or `surjectionProof` missing and then fail later during serialization or processing instead of failing fast here.
🧪 Fail fast on mode-specific fields
 const shieldedModels = txData.shieldedOutputs.map(so => {
   if (!so.commitment || !so.rangeProof || !so.script || !so.ephemeralPubkey) {
     throw new Error(
       'Shielded output missing required crypto fields (commitment, rangeProof, script, ephemeralPubkey)'
     );
   }
+  if (
+    so.mode === 'FULLY_SHIELDED' &&
+    (!so.assetCommitment || !so.surjectionProof)
+  ) {
+    throw new Error(
+      'Fully shielded output missing required crypto fields (assetCommitment, surjectionProof)'
+    );
+  }
   const tokenData = this.getTokenDataFromOutput(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/transaction.ts` around lines 667 - 695, The guard in the shieldedModels mapping only checks shared fields but doesn't enforce mode-specific requirements: before instantiating ShieldedOutput in the txData.shieldedOutputs -> shieldedModels block, add a check for outputs with so.mode === 'FULLY_SHIELDED' to ensure so.assetCommitment and so.surjectionProof are present (non-null/defined) and throw a clear Error if missing; keep existing checks for commitment, rangeProof, script, ephemeralPubkey and perform the FULLY_SHIELDED checks prior to calling this.getTokenDataFromOutput and new ShieldedOutput so failures occur fast and deterministically.src/utils/wallet.ts (2)
710-729: ⚠️ Potential issue | 🔴 Critical — Apply the same multisig guard to the seed-based path.
`generateAccessDataFromSeed()` has the same issue: multisig wallets still derive and store single-sig shielded keys from the root seed. That creates a second, incompatible signing authority alongside the multisig account.
🔒 Safer guard
-  const scanAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
-  const scanXpriv = scanAcctXpriv.deriveNonCompliantChild(0);
-  const spendAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
-  const spendXpriv = spendAcctXpriv.deriveNonCompliantChild(0);
+  let scanXpriv: HDPrivateKey | undefined;
+  let spendXpriv: HDPrivateKey | undefined;
+  if (walletType === WalletType.P2PKH) {
+    const scanAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
+    scanXpriv = scanAcctXpriv.deriveNonCompliantChild(0);
+    const spendAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
+    spendXpriv = spendAcctXpriv.deriveNonCompliantChild(0);
+  }
@@
-  scanXpubkey: scanXpriv.xpubkey,
-  scanMainKey: encryptData(scanXpriv.xprivkey, pin),
-  spendXpubkey: spendXpriv.xpubkey,
-  spendMainKey: encryptData(spendXpriv.xprivkey, pin),
+  ...(scanXpriv && {
+    scanXpubkey: scanXpriv.xpubkey,
+    scanMainKey: encryptData(scanXpriv.xprivkey, pin),
+  }),
+  ...(spendXpriv && {
+    spendXpubkey: spendXpriv.xpubkey,
+    spendMainKey: encryptData(spendXpriv.xprivkey, pin),
+  }),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/wallet.ts` around lines 710 - 729, In generateAccessDataFromSeed(), avoid deriving and storing single-sig shielded keys when multisigData is present: add a guard checking multisigData before using scanAcctXpriv/scanXpriv and spendAcctXpriv/spendXpriv (the derivation and encryptData calls), and if multisigData exists skip those derivations and set scanXpubkey/scanMainKey/spendXpubkey/spendMainKey to null/undefined (or the multisig-appropriate placeholders) so you do not create a second single-sig signing authority; mirror the multisig check used in the other wallet creation path to keep behavior consistent.
627-636: ⚠️ Potential issue | 🔴 Critical — Skip shielded key derivation for multisig wallets here.
This block still runs for `WalletType.MULTISIG`, so a multisig root key ends up persisting standalone shielded scan/spend chains derived from a single signer. That breaks the wallet’s signing model by making a multisig wallet look shielded-capable with one-party keys.
🔒 Safer guard
-if (argXpriv.depth === 0) {
+if (argXpriv.depth === 0 && walletType === WalletType.P2PKH) {
   const scanAcctXpriv = argXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
   const scanXpriv = scanAcctXpriv.deriveNonCompliantChild(0);
   accessData.scanXpubkey = scanXpriv.xpubkey;
   accessData.scanMainKey = encryptData(scanXpriv.xprivkey, pin);
   const spendAcctXpriv = argXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
   const spendXpriv2 = spendAcctXpriv.deriveNonCompliantChild(0);
   accessData.spendXpubkey = spendXpriv2.xpubkey;
   accessData.spendMainKey = encryptData(spendXpriv2.xprivkey, pin);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/wallet.ts` around lines 627 - 636, The shielded key derivation block currently runs even for multisig roots and must be skipped for WalletType.MULTISIG; wrap the existing code that derives SHIELDED_SCAN_ACCT_PATH/SHIELDED_SPEND_ACCT_PATH and sets accessData.scanXpubkey/scanMainKey/spendXpubkey/spendMainKey with a guard that checks the wallet type (e.g., if (argXpriv.depth === 0 && walletType !== WalletType.MULTISIG) or use the local isMultisig flag), so multisig root keys do not generate or persist standalone shielded scan/spend chains; leave encryptData usage intact for non-multisig flows.src/sync/stream.ts (1)
542-605: ⚠️ Potential issue | 🟠 Major — `isProcessingQueue` can still get stuck on non-derivation failures.
The new `try/catch` only protects the shielded-derivation block. If `saveAddress(addr)`, `isAddressMine()`, `addTx()`, or another awaited call throws, `processQueue()` still exits before clearing the flag, and `shutdown()` falls back to the timeout path.
Wrap the whole loop in `try/finally` and set `abortWithError(...)` from the outer catch path so queue teardown is reliable even when item processing fails.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/sync/stream.ts` around lines 542 - 605, The processQueue loop can leave isProcessingQueue true if any awaited call (e.g., storage.isAddressMine, storage.saveAddress, storage.addTx, deriveShieldedAddressFromStorage) throws; wrap the entire while(true) processing loop in a try/finally so isProcessingQueue is always cleared in the finally block, and add an outer catch that calls abortWithError(error) before rethrowing or breaking to ensure shutdown() does not fall back to the timeout path; update processQueue (the method) to catch errors from any item processing, call abortWithError(err) (or set the same failure path used by shutdown) in that catch, and then allow the finally to reset this.isProcessingQueue.src/types.ts (1)
761-765: ⚠️ Potential issue | 🟠 Major — Reflect optional shielded private keys in the interface.
`scanMainKey` and `spendMainKey` are optional in `IWalletAccessData`, so these accessors cannot always return `string` for pre-shielded wallets. The current signature hides that state from callers and pushes them into runtime failure paths instead of typed handling.
Suggested fix
-getScanXPrivKey(pinCode: string): Promise<string>;
-getSpendXPrivKey(pinCode: string): Promise<string>;
+getScanXPrivKey(pinCode: string): Promise<string | undefined>;
+getSpendXPrivKey(pinCode: string): Promise<string | undefined>;
 getScanXPubKey(): Promise<string | undefined>;
 getSpendXPubKey(): Promise<string | undefined>;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/types.ts` around lines 761 - 765, The getters getScanXPrivKey and getSpendXPrivKey currently promise a string but IWalletAccessData marks scanMainKey and spendMainKey as optional, so update the signatures of getScanXPrivKey(pinCode: string) and getSpendXPrivKey(pinCode: string) to return Promise<string | undefined>, and update any implementations (the methods that read scanMainKey / spendMainKey) to return undefined when those keys are absent and adjust callers to handle the undefined case (e.g., early-return, throw with a clear message, or conditional logic). Ensure the public getScanXPubKey/getSpendXPubKey remain Promise<string | undefined> as already declared.src/utils/storage.ts (2)
855-867: ⚠️ Potential issue | 🟠 Major — Use `shielded_spend` consistently here.
shielded-spend, so shielded spend addresses fall through the transparent path andshieldedMaxIndexUsednever advances.Based on learnings,
deriveShieldedAddressFromStorage()persists shielded spend addresses asshielded_spend.Also applies to: 967-979
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/storage.ts` around lines 855 - 867, The branch is checking for the wrong addressType token ('shielded-spend') so shielded spend addresses are misclassified; update all checks that compare addressInfo.addressType for shielded spend (including the comparison near the shieldedMaxIndexUsed update and the similar logic later) to use 'shielded_spend' (the same token used by deriveShieldedAddressFromStorage()) so shieldedMaxIndexUsed is correctly advanced instead of falling through to the legacy path; make sure any other comparisons in this file that look for 'shielded-spend' are changed to 'shielded_spend' and run tests/linters.
787-828: ⚠️ Potential issue | 🔴 Critical — Make shielded-output merging idempotent and index-stable.
`processHistory()` can revisit the same tx, but this block appends decoded shielded entries every time and then saves the enlarged transaction back to storage. On the next pass, `tx.outputs.length` is already inflated, so `result.index - transparentCount` no longer maps to the original `tx.shielded_outputs` slot and you start creating duplicate balances/UTXOs off synthetic indexes.
🛠️ Suggested fix
-// Capture transparent output count before appending decoded shielded outputs.
-// result.index from processShieldedOutputs uses this same count as base,
-// so (result.index - transparentCount) gives the shielded_outputs array index.
-const transparentCount = tx.outputs.length;
+const transparentOutputs = tx.outputs.filter(
+  output => !transactionUtils.isShieldedOutputEntry(output)
+);
 const shieldedResults = await processShieldedOutputs(
   storage,
   tx,
   storage.shieldedCryptoProvider,
   pinCode
 );
-for (const result of shieldedResults) {
+tx.outputs = transparentOutputs;
+for (const [idx, result] of shieldedResults.entries()) {
   const walletTokenUid =
     result.tokenUid === NATIVE_TOKEN_UID_HEX ? NATIVE_TOKEN_UID : result.tokenUid;
-  const so = tx.shielded_outputs[result.index - transparentCount];
+  const so = tx.shielded_outputs[idx];
🤖 Prompt for AI Agents
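The idempotency property the fix aims for can be shown on a stripped-down model. The types below are simplified stand-ins for the real history-transaction shapes: recompute the transparent base on every pass instead of trusting `tx.outputs.length`, so reprocessing the same tx never duplicates shielded entries.

```typescript
// Hedged sketch: merging decoded shielded outputs so that running the merge
// twice produces the same result as running it once.
interface SimpleOutput {
  type?: 'shielded';
  commitment?: string;
}

interface SimpleTx {
  outputs: SimpleOutput[];
}

function mergeShieldedOutputs(tx: SimpleTx, decoded: SimpleOutput[]): void {
  // Drop any shielded entries appended by a previous pass, then re-append
  // the freshly decoded set; indexes into the transparent prefix stay stable.
  const transparent = tx.outputs.filter(o => o.type !== 'shielded');
  tx.outputs = [...transparent, ...decoded];
}
```

The append-only version of this merge grows `outputs` on every revisit; filtering first makes the operation a fixed point.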
Verify each finding against the current code and only fix it if needed. In `@src/utils/storage.ts` around lines 787 - 828, The current merge appends decoded shielded outputs every pass which inflates tx.outputs and breaks index mapping; to fix, compute a stable base count of transparent outputs (e.g., const baseTransparentCount = tx.outputs.filter(o => o.type !== 'shielded').length) before calling processShieldedOutputs, then for each result compute soIndex = result.index - baseTransparentCount and only append the decoded shielded output if it is not already present (compare a unique field like commitment or decoded/address) — and only call store.saveTx(tx) if you actually appended any new entries; update the logic around processShieldedOutputs, tx.outputs pushing, and store.saveTx accordingly to make merging idempotent and index-stable.src/new/wallet.ts (2)
1543-1568: ⚠️ Potential issue | 🔴 Critical — Keep `newTx.outputs` index-stable when extracting shielded entries.
Replacing `newTx.outputs` with `transparentOutputs` renumbers every transparent output after the first shielded one. The rest of the wallet still uses array position as the on-chain output index, so spends and UTXO bookkeeping for shielded transactions will point at the wrong output.
🛠️ Suggested fix
if (!newTx.shielded_outputs) { const shieldedEntries: IHistoryShieldedOutput[] = []; - const transparentOutputs: IHistoryOutput[] = []; for (const output of newTx.outputs) { if (transactionUtils.isShieldedOutputEntry(output)) { shieldedEntries.push({ mode: output.asset_commitment ? 2 : 1, // FullShielded=2, AmountShielded=1 commitment: output.commitment, // already hex @@ surjection_proof: output.surjection_proof ? Buffer.from(output.surjection_proof, 'base64').toString('hex') : undefined, }); - } else { - transparentOutputs.push(output); } } if (shieldedEntries.length > 0) { newTx.shielded_outputs = shieldedEntries; - newTx.outputs = transparentOutputs; } }
861-865: ⚠️ Potential issue | 🟠 Major — Resolve shielded paths from the stored address type, not only from `opts`.
This only returns a shielded path when the caller already knows to pass `{ legacy: false }`. Internal APIs that derive `addressPath` from stored addresses/UTXOs (`getAvailableUtxos()`, `getWalletInputInfo()`, `getSignatures()`) still call `getAddressPathForIndex(index)` without that hint, so shielded-spend addresses keep surfacing with transparent derivation paths.
🧹 Nitpick comments (1)
src/new/types.ts (1)
330-344: Encode the shielded-address invariant in the type.
The JSDoc now says `address` must be shielded when `shielded` is set, but `ProposedOutput` still accepts any `string` in both cases. That means invalid output combinations still compile and only fail later. A discriminated union for transparent vs shielded outputs would make this API much harder to misuse.
🤖 Prompt for AI Agents
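The discriminated union suggested here can be sketched as follows. Field names mirror the comment; `OutputValueType` and `ShieldedOutputMode` are simplified placeholders for the actual types in src/new/types.ts:

```typescript
// Hedged sketch: split ProposedOutput into two variants discriminated on the
// `shielded` field, so shielded mode and address travel together.
type OutputValueType = number;
type ShieldedOutputMode = 'AMOUNT_SHIELDED' | 'FULLY_SHIELDED';

interface TransparentProposedOutput {
  shielded?: undefined; // discriminant: absent for transparent outputs
  address: string; // transparent (P2PKH/P2SH) address
  value: OutputValueType;
  token: string;
  timelock?: number;
}

interface ShieldedProposedOutput {
  shielded: ShieldedOutputMode; // discriminant: present for shielded outputs
  address: string; // must be a shielded address
  value: OutputValueType;
  token: string;
  timelock?: number;
}

type ProposedOutput = TransparentProposedOutput | ShieldedProposedOutput;

function isShieldedProposed(o: ProposedOutput): o is ShieldedProposedOutput {
  return o.shielded !== undefined;
}
```

Combinations like "shielded mode with no shielded-specific handling" then have to pass through the guard before any mode-dependent field is read.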
Verify each finding against the current code and only fix it if needed. In `@src/new/types.ts` around lines 330 - 344, Change ProposedOutput into a discriminated union so the compiler enforces the invariant that shielded outputs require a shielded address: define two interfaces (e.g. TransparentProposedOutput and ShieldedProposedOutput) that share common fields (value: OutputValueType, token: string, timelock?: number) but differ on the discriminator: TransparentProposedOutput has shielded?: undefined (or absent) and address: string (transparent address), while ShieldedProposedOutput has shielded: ShieldedOutputMode and address: string (or a new ShieldedAddress type if available) and then export type ProposedOutput = TransparentProposedOutput | ShieldedProposedOutput; update references to ProposedOutput accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@__tests__/headers/shielded_outputs.test.ts`:
- Around line 214-215: The inner test-suite declares a new const network which
shadows the suite-level network (the inner `const network = new
Network('testnet')` inside `describe('deserialization bounds checking')`);
remove the redeclaration and reuse the outer `network` variable (or rename the
inner variable to a non-conflicting name) so the suite-level `network` is not
shadowed and the no-shadow lint rule is satisfied.
In `@__tests__/integration/shielded_transactions.test.ts`:
- Around line 608-684: Two integration tests are known to fail because shielded
token amounts are not included in the token-balance equation; mark the two cases
as skipped to keep the suite green. Locate the two tests named "should send
shielded outputs with a custom token (AmountShielded)" and "should send
FullShielded outputs with a custom token" in shielded_transactions.test.ts and
change them to skipped tests (e.g., it.skip or test.skip) with a short message
referencing the TODO ("shielded amounts missing from token-balance equation") so
the reason is recorded; restore them once the token-balance fix (accounting for
shielded outputs) is implemented.
- Around line 31-36: The test mutates the module-scoped constant
(constants.TIMEOUT) and never restores it; save the original value before
overriding and restore it after the suite: e.g., capture const originalTimeout =
(constants as any).TIMEOUT, set (constants as any).TIMEOUT = 30000 in the
existing setup, and add an afterAll (or finally in the suite) that resets
(constants as any).TIMEOUT = originalTimeout so other tests aren’t affected.
In `@__tests__/wallet/walletServiceStorageProxy.test.ts`:
- Around line 18-21: Add a positive shielded-output test that flips the existing
mock of isShieldedOutputEntry to return true for that single test and asserts
the proxied transaction shape produced by WalletServiceStorageProxy;
specifically, in the new test use the mocked transaction util (e.g., require or
jest.spyOn on isShieldedOutputEntry) to mockReturnValue(true), call the
WalletServiceStorageProxy method that creates/proxies the tx, and assert the
resulting proxied tx includes the expected shielded-output-specific
fields/structure while leaving getSignatureForTx behavior unchanged.
In `@src/models/shielded_output.ts`:
- Around line 87-99: The serializer currently ignores unknown this.mode values
and emits partial bytes; update the logic handling ShieldedOutputMode in the
serialize path (the block checking this.mode ===
ShieldedOutputMode.AMOUNT_SHIELDED / FULLY_SHIELDED) to throw an Error for any
unsupported/unknown mode instead of falling through, and apply the same strict
check in the corresponding deserialization/path (the complementary branch around
assetCommitment, surjectionProof, tokenData handling) so unknown modes
consistently raise an exception; reference ShieldedOutputMode, this.mode,
tokenData, assetCommitment, and surjectionProof to locate and update the code.
In `@src/new/sendTransaction.ts`:
- Around line 411-425: The loop in sendTransaction.ts that builds
blindedInputsArr silently drops or substitutes missing blinding metadata;
instead, when this.storage.getUtxo returns a shielded utxo, validate that
utxo.blindingFactor and utxo.assetBlindingFactor are present and if either is
missing throw an error (or reject) immediately so we fail fast; update the loop
around allInputs (the code that populates blindedInputsArr) to throw with a
clear message referencing the offending inp (txId/index) when utxo.shielded &&
(!utxo.blindingFactor || !utxo.assetBlindingFactor), and remove the fallback
Buffer.alloc(32,0) behavior so createShieldedOutputs uses only complete
blindedInputsArr.
In `@src/shielded/creation.ts`:
- Around line 19-25: Add a timelock field to ShieldedOutputDef (e.g., timelock:
number | null) and propagate it into the shielded output script creation instead
of hardcoding timelock: null; specifically, update the ShieldedOutputDef
interface to include timelock, ensure places that construct shielded outputs
(the code that currently sets timelock: null when building the output script)
use o.timelock, and adjust any other spots mentioned (around the 166-170 block)
to read/pass the timelock from sendManyOutputsSendTransaction through to the
final output construction.
In `@src/storage/memory_store.ts`:
- Around line 313-338: The code advances the shielded cursors
(walletData.shieldedCurrentAddressIndex and shieldedLastLoadedAddressIndex) for
both 'shielded' and 'shielded-spend' addresses even though only 'shielded'
addresses are stored in shieldedAddressIndexes, causing lookups to point at
missing entries; change the branching so that the calls to
setCurrentAddressIndex(..., { legacy: false }) and the update to
shieldedLastLoadedAddressIndex only run when info.addressType === 'shielded'
(not for 'shielded-spend'), leaving any handling of 'shielded-spend' out of the
shielded cursor advancement logic and preserving existing behavior for legacy
(currentAddressIndex/lastLoadedAddressIndex) paths.
In `@src/storage/storage.ts`:
- Around line 380-410: The current addTx logic mutates tx.outputs to remove
shielded entries, shifting on-chain indexes and breaking consumers that rely on
stable output positions; instead, extract shielded entries into
tx.shielded_outputs (using transactionUtils.isShieldedOutputEntry) but do not
reassign or modify tx.outputs—keep the original outputs array intact for
persistence and on-chain index stability, and if you need a transparent-only
view for processing create a temporary local variable (e.g., transparentOutputs)
and use that for downstream processing without writing it back to tx.outputs.
In `@src/utils/address.ts`:
- Around line 23-43: getAddressType currently throws for shielded addresses;
change it to classify instead: remove the throw in getAddressType (and return
the literal 'shielded' when addressObj.getType() === 'shielded'), and widen the
function signature to return 'p2pkh' | 'p2sh' | 'shielded' so callers can branch
on the parsed type. Leave enforcement of "transparent-only" in the call sites
that build output scripts (instead of here) — update those script-building
callers to check for 'shielded' and throw or handle as appropriate.
---
Outside diff comments:
In `@src/models/address.ts`:
- Around line 98-127: In validateAddress (in src/models/address.ts) ensure the
payload length is consistent with the version byte: after computing firstByte
(version) and before returning/continuing, enforce that if firstByte ===
this.network.versionBytes.shielded then addressBytes.length must equal
SHIELDED_ADDR_LENGTH, otherwise addressBytes.length must equal
LEGACY_ADDR_LENGTH (or explicitly allow p2pkh/p2sh to only match
LEGACY_ADDR_LENGTH); throw an AddressError with a clear message if the mismatch
occurs — this ties payload length to the version byte and prevents
misclassification by getType()/getScript().
In `@src/models/transaction.ts`:
- Around line 80-120: The code currently has two sources of truth for shielded
outputs (the public field shieldedOutputs and the serialized headers entry
ShieldedOutputsHeader), causing inconsistencies between serialization (toBytes,
getDataToSign, hashing) and runtime checks (validate, calculateWeight); change
the implementation so there is a single source of truth: either (A) remove
direct mutations of shieldedOutputs and always derive its value on access from
the ShieldedOutputsHeader stored in headers (add a getter like
getShieldedOutputs() used by validate, calculateWeight, signing), or (B) create
a single helper (e.g., setShieldedOutputs(headerOrList)) that updates both
this.headers (the ShieldedOutputsHeader entry) and this.shieldedOutputs
atomically and replace all direct assignments with that helper; update toBytes,
getDataToSign, validate, and calculateWeight to use the new getter/helper so
serialization and runtime logic remain consistent.
In `@src/schemas.ts`:
- Around line 174-205: The schema requires the `decoded` field in
`IHistoryShieldedOutputSchema`, causing `IHistoryTxSchema.parse()` to fail when
a shielded output lacks that helper; make `decoded` optional by changing the
`decoded: IHistoryOutputDecodedSchema` entry in `IHistoryShieldedOutputSchema`
to an optional variant (e.g., use `.optional()` on
`IHistoryOutputDecodedSchema`) so `shielded_outputs` can be parsed when
`decoded` is absent while leaving the rest of the schema (including passthrough)
intact.
In `@src/wallet/types.ts`:
- Around line 723-731: Replace the single FullNodeOutput interface with a
discriminated union so consumers cannot assume transparent-only fields exist on
shielded entries: define FullNodeTransparentOutput (with value, token_data,
script, decoded, optional token/spent_by and type='transparent' or undefined)
and use the existing IShieldedOutputEntry as the other variant, then change
FullNodeTx.outputs to be (FullNodeTransparentOutput | IShieldedOutputEntry)[];
update any type guards (e.g., isShieldedOutputEntry) to narrow to
IShieldedOutputEntry so code accessing value/token_data/script/decoded only does
so after the guard.
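The proposed union can be sketched like this; the field lists are illustrative assumptions, not the library's exact `FullNodeTx` shape:

```typescript
// Sketch of the discriminated union: transparent-only fields are unreachable
// on shielded entries unless the guard has narrowed the type first.
interface FullNodeTransparentOutput {
  type?: 'transparent';
  value: bigint;
  token_data: number;
  script: string;
}

interface IShieldedOutputEntry {
  type: 'shielded';
  commitment: string;
  ephemeralPubkey: string;
}

type FullNodeOutput = FullNodeTransparentOutput | IShieldedOutputEntry;

function isShieldedOutputEntry(o: FullNodeOutput): o is IShieldedOutputEntry {
  return o.type === 'shielded';
}

function sumTransparent(outputs: FullNodeOutput[]): bigint {
  let total = 0n;
  for (const o of outputs) {
    if (!isShieldedOutputEntry(o)) {
      total += o.value; // the compiler only allows `value` in this branch
    }
  }
  return total;
}
```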
---
Duplicate comments:
In `@src/sync/stream.ts`:
- Around line 542-605: The processQueue loop can leave isProcessingQueue true if
any awaited call (e.g., storage.isAddressMine, storage.saveAddress,
storage.addTx, deriveShieldedAddressFromStorage) throws; wrap the entire
while(true) processing loop in a try/finally so isProcessingQueue is always
cleared in the finally block, and add an outer catch that calls
abortWithError(error) before rethrowing or breaking to ensure shutdown() does
not fall back to the timeout path; update processQueue (the method) to catch
errors from any item processing, call abortWithError(err) (or set the same
failure path used by shutdown) in that catch, and then allow the finally to
reset this.isProcessingQueue.
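The try/catch/finally shape described above can be sketched as follows; `abortWithError` and the queue-item type are stand-ins for the stream's real members:

```typescript
// Sketch: the finally block guarantees the flag is cleared even when an
// awaited item throws, so shutdown() never has to fall back to a timeout.
class QueueSketch {
  isProcessingQueue = false;
  aborted: Error | null = null;
  private queue: Array<() => Promise<void>> = [];

  enqueue(item: () => Promise<void>): void {
    this.queue.push(item);
  }

  private abortWithError(err: Error): void {
    this.aborted = err; // stand-in for the stream's real failure path
  }

  async processQueue(): Promise<void> {
    if (this.isProcessingQueue) return;
    this.isProcessingQueue = true;
    try {
      while (this.queue.length > 0) {
        const item = this.queue.shift()!;
        await item(); // a throw here used to leave isProcessingQueue stuck
      }
    } catch (err) {
      this.abortWithError(err as Error);
    } finally {
      this.isProcessingQueue = false;
    }
  }
}
```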
In `@src/types.ts`:
- Around line 761-765: The getters getScanXPrivKey and getSpendXPrivKey
currently promise a string but IWalletAccessData marks scanMainKey and
spendMainKey as optional, so update the signatures of getScanXPrivKey(pinCode:
string) and getSpendXPrivKey(pinCode: string) to return Promise<string |
undefined>, and update any implementations (the methods that read scanMainKey /
spendMainKey) to return undefined when those keys are absent and adjust callers
to handle the undefined case (e.g., early-return, throw with a clear message, or
conditional logic). Ensure the public getScanXPubKey/getSpendXPubKey remain
Promise<string | undefined> as already declared.
In `@src/utils/storage.ts`:
- Around line 855-867: The branch is checking for the wrong addressType token
('shielded-spend') so shielded spend addresses are misclassified; update all
checks that compare addressInfo.addressType for shielded spend (including the
comparison near the shieldedMaxIndexUsed update and the similar logic later) to
use 'shielded_spend' (the same token used by deriveShieldedAddressFromStorage())
so shieldedMaxIndexUsed is correctly advanced instead of falling through to the
legacy path; make sure any other comparisons in this file that look for
'shielded-spend' are changed to 'shielded_spend' and run tests/linters.
- Around line 787-828: The current merge appends decoded shielded outputs every
pass which inflates tx.outputs and breaks index mapping; to fix, compute a
stable base count of transparent outputs (e.g., const baseTransparentCount =
tx.outputs.filter(o => o.type !== 'shielded').length) before calling
processShieldedOutputs, then for each result compute soIndex = result.index -
baseTransparentCount and only append the decoded shielded output if it is not
already present (compare a unique field like commitment or decoded/address) —
and only call store.saveTx(tx) if you actually appended any new entries; update
the logic around processShieldedOutputs, tx.outputs pushing, and store.saveTx
accordingly to make merging idempotent and index-stable.
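The idempotent merge can be sketched with simplified stand-ins for the real `IHistoryTx` shapes:

```typescript
// Sketch: a stable transparent base count maps absolute output indexes to
// shielded-result positions, and commitments already present are skipped,
// so running the merge twice appends nothing.
interface OutputLike {
  type?: 'shielded';
  commitment?: string;
}

interface ShieldedResult {
  index: number; // absolute output index in the on-chain tx
  commitment: string;
}

function mergeShieldedOutputs(outputs: OutputLike[], results: ShieldedResult[]): boolean {
  const baseTransparentCount = outputs.filter(o => o.type !== 'shielded').length;
  const seen = new Set(
    outputs.filter(o => o.type === 'shielded').map(o => o.commitment)
  );
  let appended = false;
  for (const result of results) {
    const soIndex = result.index - baseTransparentCount;
    if (soIndex < 0 || seen.has(result.commitment)) continue;
    outputs.push({ type: 'shielded', commitment: result.commitment });
    appended = true; // caller should only saveTx when this is true
  }
  return appended;
}
```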
In `@src/utils/transaction.ts`:
- Around line 667-695: The guard in the shieldedModels mapping only checks
shared fields but doesn't enforce mode-specific requirements: before
instantiating ShieldedOutput in the txData.shieldedOutputs -> shieldedModels
block, add a check for outputs with so.mode === 'FULLY_SHIELDED' to ensure
so.assetCommitment and so.surjectionProof are present (non-null/defined) and
throw a clear Error if missing; keep existing checks for commitment, rangeProof,
script, ephemeralPubkey and perform the FULLY_SHIELDED checks prior to calling
this.getTokenDataFromOutput and new ShieldedOutput so failures occur fast and
deterministically.
In `@src/utils/wallet.ts`:
- Around line 710-729: In generateAccessDataFromSeed(), avoid deriving and
storing single-sig shielded keys when multisigData is present: add a guard
checking multisigData before using scanAcctXpriv/scanXpriv and
spendAcctXpriv/spendXpriv (the derivation and encryptData calls), and if
multisigData exists skip those derivations and set
scanXpubkey/scanMainKey/spendXpubkey/spendMainKey to null/undefined (or the
multisig-appropriate placeholders) so you do not create a second single-sig
signing authority; mirror the multisig check used in the other wallet creation
path to keep behavior consistent.
- Around line 627-636: The shielded key derivation block currently runs even for
multisig roots and must be skipped for WalletType.MULTISIG; wrap the existing
code that derives SHIELDED_SCAN_ACCT_PATH/SHIELDED_SPEND_ACCT_PATH and sets
accessData.scanXpubkey/scanMainKey/spendXpubkey/spendMainKey with a guard that
checks the wallet type (e.g., if (argXpriv.depth === 0 && walletType !==
WalletType.MULTISIG) or use the local isMultisig flag), so multisig root keys do
not generate or persist standalone shielded scan/spend chains; leave encryptData
usage intact for non-multisig flows.
---
Nitpick comments:
In `@src/new/types.ts`:
- Around line 330-344: Change ProposedOutput into a discriminated union so the
compiler enforces the invariant that shielded outputs require a shielded
address: define two interfaces (e.g. TransparentProposedOutput and
ShieldedProposedOutput) that share common fields (value: OutputValueType, token:
string, timelock?: number) but differ on the discriminator:
TransparentProposedOutput has shielded?: undefined (or absent) and address:
string (transparent address), while ShieldedProposedOutput has shielded:
ShieldedOutputMode and address: string (or a new ShieldedAddress type if
available) and then export type ProposedOutput = TransparentProposedOutput |
ShieldedProposedOutput; update references to ProposedOutput accordingly.
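A minimal sketch of the discriminated `ProposedOutput`; the `ShieldedOutputMode` values are assumed from the PR and the field lists are illustrative:

```typescript
// Sketch: the `shielded` field is the discriminator, so the compiler rejects
// a shielded mode without an accompanying address and lets callers narrow.
type OutputValueType = bigint;

enum ShieldedOutputMode {
  AMOUNT_SHIELDED = 'AMOUNT_SHIELDED',
  FULLY_SHIELDED = 'FULLY_SHIELDED',
}

interface TransparentProposedOutput {
  shielded?: undefined;
  address: string; // transparent address
  value: OutputValueType;
  token: string;
  timelock?: number;
}

interface ShieldedProposedOutput {
  shielded: ShieldedOutputMode;
  address: string; // shielded address (or a dedicated ShieldedAddress type)
  value: OutputValueType;
  token: string;
  timelock?: number;
}

type ProposedOutput = TransparentProposedOutput | ShieldedProposedOutput;

function isShieldedProposed(o: ProposedOutput): o is ShieldedProposedOutput {
  return o.shielded !== undefined;
}
```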
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: c0b57b7d-1e0f-4505-b6c4-7d85fe6c7ca9
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (46)
- `__tests__/headers/shielded_outputs.test.ts`
- `__tests__/integration/configuration/docker-compose.yml`
- `__tests__/integration/configuration/privnet.yml`
- `__tests__/integration/hathorwallet_facade.test.ts`
- `__tests__/integration/helpers/wallet.helper.ts`
- `__tests__/integration/shielded_transactions.test.ts`
- `__tests__/models/address.test.ts`
- `__tests__/models/network.test.ts`
- `__tests__/models/shielded_output.test.ts`
- `__tests__/models/transaction.test.ts`
- `__tests__/shielded/creation.test.ts`
- `__tests__/shielded/processing.test.ts`
- `__tests__/storage/storage.test.ts`
- `__tests__/wallet/walletServiceStorageProxy.test.ts`
- `package.json`
- `setupTests-integration.js`
- `src/constants.ts`
- `src/headers/parser.ts`
- `src/headers/shielded_outputs.ts`
- `src/headers/types.ts`
- `src/lib.ts`
- `src/models/address.ts`
- `src/models/network.ts`
- `src/models/shielded_output.ts`
- `src/models/transaction.ts`
- `src/new/sendTransaction.ts`
- `src/new/types.ts`
- `src/new/wallet.ts`
- `src/schemas.ts`
- `src/shielded/creation.ts`
- `src/shielded/index.ts`
- `src/shielded/processing.ts`
- `src/shielded/provider.ts`
- `src/shielded/types.ts`
- `src/storage/memory_store.ts`
- `src/storage/storage.ts`
- `src/sync/stream.ts`
- `src/types.ts`
- `src/utils/address.ts`
- `src/utils/bigint.ts`
- `src/utils/shieldedAddress.ts`
- `src/utils/storage.ts`
- `src/utils/transaction.ts`
- `src/utils/wallet.ts`
- `src/wallet/types.ts`
- `src/wallet/walletServiceStorageProxy.ts`
💤 Files with no reviewable changes (1)
- setupTests-integration.js
```ts
import * as constants from '../../src/constants';

// Increase Axios timeout for test environment — the fullnode is under load from continuous mining.
// TIMEOUT is declared as `const` so we must cast to override it in tests.
// eslint-disable-next-line @typescript-eslint/no-explicit-any
(constants as any).TIMEOUT = 30000;
```
Restore the shared timeout after this suite.
This mutates src/constants at module scope and never puts it back, so later integration files in the same worker inherit the 30s Axios timeout and become order-dependent.
🛠️ Suggested fix

```diff
 import * as constants from '../../src/constants';

-// Increase Axios timeout for test environment — the fullnode is under load from continuous mining.
-// TIMEOUT is declared as `const` so we must cast to override it in tests.
-// eslint-disable-next-line @typescript-eslint/no-explicit-any
-(constants as any).TIMEOUT = 30000;
+const originalTimeout = constants.TIMEOUT;

 describe('shielded transactions', () => {
+  beforeAll(() => {
+    // Increase Axios timeout for this suite only — the fullnode is under load from continuous mining.
+    // eslint-disable-next-line @typescript-eslint/no-explicit-any
+    (constants as any).TIMEOUT = 30000;
+  });
+
+  afterAll(() => {
+    // eslint-disable-next-line @typescript-eslint/no-explicit-any
+    (constants as any).TIMEOUT = originalTimeout;
+  });
+
   jest.setTimeout(300_000);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@__tests__/integration/shielded_transactions.test.ts` around lines 31 - 36,
The test mutates the module-scoped constant (constants.TIMEOUT) and never
restores it; save the original value before overriding and restore it after the
suite: e.g., capture const originalTimeout = (constants as any).TIMEOUT, set
(constants as any).TIMEOUT = 30000 in the existing setup, and add an afterAll
(or finally in the suite) that resets (constants as any).TIMEOUT =
originalTimeout so other tests aren’t affected.
```ts
// TODO: Custom token shielded outputs fail with "tokens melted, but there is no melt authority input".
// The phantom output trick in sendTransaction.ts balances UTXO selection, but when phantoms are
// removed the custom token inputs exceed the transparent outputs, looking like a melt to the fullnode.
// The shielded output amounts need to be accounted for in the token balance equation.
it('should send shielded outputs with a custom token (AmountShielded)', async () => {
  const walletA = await generateWalletHelper();
  const walletB = await generateWalletHelper();

  // Fund walletA and create a custom token
  const addrA = await walletA.getAddressAtIndex(0);
  await GenesisWalletHelper.injectFunds(walletA, addrA, 20n);
  const tokenResp = await createTokenHelper(walletA, 'ShieldedToken', 'SHT', 1000n);
  const tokenUid = tokenResp.hash;

  // Send shielded outputs of the custom token to walletB
  const shieldedAddrB0 = await walletB.getAddressAtIndex(0, { legacy: false });
  const shieldedAddrB1 = await walletB.getAddressAtIndex(1, { legacy: false });

  const tx = await walletA.sendManyOutputsTransaction([
    {
      address: shieldedAddrB0,
      value: 300n,
      token: tokenUid,
      shielded: ShieldedOutputMode.AMOUNT_SHIELDED,
    },
    {
      address: shieldedAddrB1,
      value: 200n,
      token: tokenUid,
      shielded: ShieldedOutputMode.AMOUNT_SHIELDED,
    },
  ]);
  expect(tx).not.toBeNull();
  await waitForTxReceived(walletB, tx!.hash!);

  // WalletB should see the custom token balance from decrypted shielded outputs
  const balanceB = await walletB.getBalance(tokenUid);
  expect(balanceB[0].balance.unlocked).toBe(500n);

  // WalletA should have the remaining custom tokens
  const balanceA = await walletA.getBalance(tokenUid);
  expect(balanceA[0].balance.unlocked).toBe(500n);
});

// TODO: Same issue as AmountShielded custom token test above.
it('should send FullShielded outputs with a custom token', async () => {
  const walletA = await generateWalletHelper();
  const walletB = await generateWalletHelper();

  const addrA = await walletA.getAddressAtIndex(0);
  await GenesisWalletHelper.injectFunds(walletA, addrA, 20n);
  const tokenResp = await createTokenHelper(walletA, 'FullShieldToken', 'FST', 1000n);
  const tokenUid = tokenResp.hash;

  const shieldedAddrB0 = await walletB.getAddressAtIndex(0, { legacy: false });
  const shieldedAddrB1 = await walletB.getAddressAtIndex(1, { legacy: false });

  const tx = await walletA.sendManyOutputsTransaction([
    {
      address: shieldedAddrB0,
      value: 400n,
      token: tokenUid,
      shielded: ShieldedOutputMode.FULLY_SHIELDED,
    },
    {
      address: shieldedAddrB1,
      value: 100n,
      token: tokenUid,
      shielded: ShieldedOutputMode.FULLY_SHIELDED,
    },
  ]);
  expect(tx).not.toBeNull();
  await waitForTxReceived(walletB, tx!.hash!);

  const balanceB = await walletB.getBalance(tokenUid);
  expect(balanceB[0].balance.unlocked).toBe(500n);
});
```
Don’t leave known-failing custom-token cases enabled.
The TODO here says these flows still fail because shielded amounts are missing from the token-balance equation, but both specs are still active. That will keep the suite red until the production fix lands.
Do you want me to draft a minimal follow-up that marks these as skipped with the TODO reason, or would you rather land the token-balance fix in this PR?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@__tests__/integration/shielded_transactions.test.ts` around lines 608 - 684,
Two integration tests are known to fail because shielded token amounts are not
included in the token-balance equation; mark the two cases as skipped to keep
the suite green. Locate the two tests named "should send shielded outputs with a
custom token (AmountShielded)" and "should send FullShielded outputs with a
custom token" in shielded_transactions.test.ts and change them to skipped tests
(e.g., it.skip or test.skip) with a short message referencing the TODO
("shielded amounts missing from token-balance equation") so the reason is
recorded; restore them once the token-balance fix (accounting for shielded
outputs) is implemented.
```ts
jest.mock('../../src/utils/transaction', () => ({
  getSignatureForTx: jest.fn(),
  isShieldedOutputEntry: jest.fn().mockReturnValue(false),
}));
```
Add one positive shielded-output case to this suite.
Line 20 pins isShieldedOutputEntry to false for every test here, so the new shielded branch in WalletServiceStorageProxy can regress without this file noticing. A companion case that returns true and asserts the proxied tx shape would close that gap.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@__tests__/wallet/walletServiceStorageProxy.test.ts` around lines 18 - 21, Add
a positive shielded-output test that flips the existing mock of
isShieldedOutputEntry to return true for that single test and asserts the
proxied transaction shape produced by WalletServiceStorageProxy; specifically,
in the new test use the mocked transaction util (e.g., require or jest.spyOn on
isShieldedOutputEntry) to mockReturnValue(true), call the
WalletServiceStorageProxy method that creates/proxies the tx, and assert the
resulting proxied tx includes the expected shielded-output-specific
fields/structure while leaving getSignatureForTx behavior unchanged.
```ts
// Extract blinding factors from shielded inputs for the homomorphic balance equation.
const blindedInputsArr: Array<{ value: bigint; vbf: Buffer; gbf: Buffer }> = [];
for (const inp of allInputs) {
  const utxo = await this.storage.getUtxo({
    txId: inp.txId,
    index: inp.index,
  });
  if (utxo?.shielded && utxo.blindingFactor) {
    blindedInputsArr.push({
      value: utxo.value,
      vbf: Buffer.from(utxo.blindingFactor, 'hex'),
      gbf: utxo.assetBlindingFactor
        ? Buffer.from(utxo.assetBlindingFactor, 'hex')
        : Buffer.alloc(32, 0),
    });
```
Fail fast if a selected shielded input has incomplete blinding metadata.
This loop silently drops shielded inputs when blindingFactor is missing, and it also substitutes an all-zero asset blinding factor when assetBlindingFactor is absent. That means createShieldedOutputs() can build proofs against the wrong blinded-input set instead of rejecting an inconsistent wallet state.
Suggested fix

```diff
     const blindedInputsArr: Array<{ value: bigint; vbf: Buffer; gbf: Buffer }> = [];
     for (const inp of allInputs) {
       const utxo = await this.storage.getUtxo({
         txId: inp.txId,
         index: inp.index,
       });
-      if (utxo?.shielded && utxo.blindingFactor) {
+      if (utxo?.shielded) {
+        if (!utxo.blindingFactor) {
+          throw new SendTxError(
+            `Missing blinding metadata for shielded input ${inp.txId}:${inp.index}.`
+          );
+        }
         blindedInputsArr.push({
           value: utxo.value,
           vbf: Buffer.from(utxo.blindingFactor, 'hex'),
           gbf: utxo.assetBlindingFactor
             ? Buffer.from(utxo.assetBlindingFactor, 'hex')
             : Buffer.alloc(32, 0),
         });
       }
     }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/new/sendTransaction.ts` around lines 411 - 425, The loop in
sendTransaction.ts that builds blindedInputsArr silently drops or substitutes
missing blinding metadata; instead, when this.storage.getUtxo returns a shielded
utxo, validate that utxo.blindingFactor and utxo.assetBlindingFactor are present
and if either is missing throw an error (or reject) immediately so we fail fast;
update the loop around allInputs (the code that populates blindedInputsArr) to
throw with a clear message referencing the offending inp (txId/index) when
utxo.shielded && (!utxo.blindingFactor || !utxo.assetBlindingFactor), and remove
the fallback Buffer.alloc(32,0) behavior so createShieldedOutputs uses only
complete blindedInputsArr.
```diff
 /**
- * Parse address and return the address type
+ * Parse address and return the address type.
+ * Returns 'p2pkh' or 'p2sh' for legacy addresses.
+ * Throws for shielded addresses — callers expecting an output script type
+ * should not receive shielded addresses directly.
  *
  * @param {string} address
  * @param {Network} network
  *
- * @returns {string} output type of the address (p2pkh or p2sh)
+ * @returns {'p2pkh' | 'p2sh'} output type of the address
  */
 export function getAddressType(address: string, network: Network): 'p2pkh' | 'p2sh' {
   const addressObj = new Address(address, { network });
-  return addressObj.getType();
+  const addrType = addressObj.getType();
+  if (addrType === 'shielded') {
+    throw new Error(
+      'Shielded addresses cannot be used directly as output script type. Use the spend-derived P2PKH address instead.'
+    );
+  }
+  return addrType;
 }
```
Don't make getAddressType() throw on a valid shielded address.
This helper reads like a classifier, but it now rejects a supported address kind. That is a breaking semantic change for callers that only need to branch on the parsed type. Keep the shielded rejection in the script-building paths instead, where the transparent-only restriction actually applies.
Suggested direction

```diff
-export function getAddressType(address: string, network: Network): 'p2pkh' | 'p2sh' {
+export function getAddressType(
+  address: string,
+  network: Network
+): 'p2pkh' | 'p2sh' | 'shielded' {
   const addressObj = new Address(address, { network });
-  const addrType = addressObj.getType();
-  if (addrType === 'shielded') {
-    throw new Error(
-      'Shielded addresses cannot be used directly as output script type. Use the spend-derived P2PKH address instead.'
-    );
-  }
-  return addrType;
+  return addressObj.getType();
 }
```

Then keep the transparent-output restriction at callers that actually need a script type.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/address.ts` around lines 23 - 43, getAddressType currently throws
for shielded addresses; change it to classify instead: remove the throw in
getAddressType (and return the literal 'shielded' when addressObj.getType() ===
'shielded'), and widen the function signature to return 'p2pkh' | 'p2sh' |
'shielded' so callers can branch on the parsed type. Leave enforcement of
"transparent-only" in the call sites that build output scripts (instead of here)
— update those script-building callers to check for 'shielded' and throw or
handle as appropriate.
…ge cases

- Test selectUtxos shielded filter (true/false/undefined) in MemoryStore
- Test getSignatureForTx uses spend key chain for shielded-spend addresses
- Test deriveShieldedAddressFromStorage returns null without shielded keys
- Test read-only wallet gracefully handles shielded outputs

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…rectness

Security & validation:
- Fix wrong signing path comment (account 2', not 1')
- Zero private key buffers after ECDH use (defense-in-depth)
- Validate recovered shielded value is positive after rewind
- Throw on missing blindingFactor for shielded inputs
- Throw on unknown mode in serialize/serializeSighash
- Guard resolveTokenUid against FullShielded misuse

Deduplication & code quality:
- Extract normalizeShieldedOutputs shared utility
- Replace magic numbers with ShieldedOutputMode enum
- Move ZERO_TWEAK to shared constants
- Use typed parameter for isShieldedOutputEntry

Correctness:
- Restore missing saveAddress for on-demand legacy derivation
- Propagate timelock through shielded output creation
- Only advance shielded cursor for 'shielded' type, not 'shielded-spend'
- Clarify WS subscription strategy comment
- Remove shadowed network variable in tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
> @coderabbitai once again. All your comments were addressed already. Resolve everything and run a full review

✅ Actions performed

Full review triggered.
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
__tests__/integration/helpers/wallet.helper.ts (1)
327-338: ⚠️ Potential issue | 🟠 Major

`processHistory` is invoked too late to prevent the timeout path.

The current flow waits for `processingStatus === FINISHED` first, but the timeout failures indicate history may need processing before that state is reached. Line 337 won’t run in that failure path.

🩹 Suggested fix

```diff
     while (!storageTx || storageTx.processingStatus !== TxHistoryProcessingStatus.FINISHED) {
       if (timeoutReached) {
         break;
       }
+      if (storageTx && storageTx.processingStatus !== TxHistoryProcessingStatus.FINISHED) {
+        await hWallet.storage.processHistory(hWallet.pinCode ?? undefined);
+      }
+
       // Tx not found, wait 1s before trying again
       await delay(1000);
       storageTx = (await hWallet.getTx(txId)) as IHistoryTx;
     }
@@
-    await hWallet.storage.processHistory(hWallet.pinCode || undefined);
+    await hWallet.storage.processHistory(hWallet.pinCode ?? undefined);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__tests__/integration/helpers/wallet.helper.ts` around lines 327 - 338, The timeout happens because processHistory is only called after waiting for processingStatus === FINISHED, so when the timeout path is taken line invoking hWallet.storage.processHistory(...) never runs; change the flow so that after marking inputs spent you always invoke and await hWallet.storage.processHistory(hWallet.pinCode || undefined) (i.e., call processHistory immediately after updateInputsSpentBy(hWallet, storageTx) whenever storageTx.is_voided === false) rather than waiting for processingStatus to reach FINISHED, ensuring history is processed in the timeout path as well.
♻️ Duplicate comments (12)
src/utils/bigint.ts (1)
29-48: ⚠️ Potential issue | 🟠 Major

Optional chaining here doesn’t prevent missing-context runtime failures.

Line 47 now uses `context?.source`, but Line 36 still does `BigInt(context.source)`. In runtimes where the reviver `context` arg is absent, this throws before the `SyntaxError` branch, so parsing can fail for numeric fields.

Suggested fix

```diff
-  bigIntReviver(_key: string, value: any, context: { source: string }): any {
+  bigIntReviver(_key: string, value: any, context?: { source: string }): any {
     if (typeof value !== 'number') {
       // No special handling needed for non-number values.
       return value;
     }
+
+    // Some runtimes do not provide the reviver context argument.
+    if (!context?.source) {
+      if (Number.isInteger(value) && !Number.isSafeInteger(value)) {
+        throw new Error(
+          'JSONBigInt.parse cannot safely recover an unsafe integer when reviver context is unavailable'
+        );
+      }
+      return value;
+    }

     try {
       const bigIntValue = BigInt(context.source);
@@
-        (e.message === `Cannot convert ${context?.source} to a BigInt` ||
+        (e.message === `Cannot convert ${context.source} to a BigInt` ||
           e.message === `invalid BigInt syntax`)
```

Does `JSON.parse` reviver provide a third `context` argument (with `source`) in Node.js 22, V8, and Hermes/React Native runtimes?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/bigint.ts` around lines 29 - 48, bigIntReviver currently calls BigInt(context.source) without ensuring context or context.source exists, which can throw before the catch; change bigIntReviver to first determine a safe input string for BigInt by checking if context and context.source are present (e.g., typeof context?.source === 'string') and use that, otherwise fall back to String(value) or another safe representation, then call BigInt on that input; also update the catch branch to compare error messages against generic BigInt errors without assuming context.source is available (remove context?.source usage in error message comparisons) so the reviver is safe when the third argument is missing.__tests__/integration/hathorwallet_facade.test.ts (1)
70-70: ⚠️ Potential issue | 🟡 Minor

Avoid `nonce > 0` in the mining assertion (flaky).

Line 70 reintroduces a brittle check; the nonce can be `0` in valid mined transactions.

✅ Suggested assertion update

```diff
-      expect(tx.nonce).toBeGreaterThan(0);
+      expect(tx.nonce).toEqual(expect.any(Number));
+      expect(tx.parents.length).toBeGreaterThan(0);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__tests__/integration/hathorwallet_facade.test.ts` at line 70, The test currently asserts a mined transaction nonce with expect(tx.nonce).toBeGreaterThan(0), which is flaky because nonce may be 0; update the assertion for tx.nonce in __tests__/integration/hathorwallet_facade.test.ts to allow zero (e.g., assert it is a number and >= 0 or use toBeGreaterThanOrEqual(0)) so the test only fails for negative or non-numeric nonces; adjust any surrounding comments to reflect the new expectation.package.json (1)
17-17: ⚠️ Potential issue | 🟠 Major

Release-blocker: avoid a `file:` runtime dependency in the published package.

Line 17 points to a local sibling path, which breaks installs outside this exact checkout layout. Keep this only for local integration branches; before release, replace with a published semver (or an internal registry tarball URL).

🔧 Suggested change

```diff
-    "@hathor/ct-crypto-node": "file:../hathor-core/hathor-ct-crypto/npm",
+    "@hathor/ct-crypto-node": "<published-version>",
```

```bash
#!/bin/bash
set -euo pipefail

echo "Runtime dependencies that use file: references"
jq -r '.dependencies // {} | to_entries[] | select(.value | startswith("file:")) | "\(.key)=\(.value)"' package.json

echo
echo "Where @hathor/ct-crypto-node is imported (to confirm runtime need):"
rg -nP "['\"]@hathor/ct-crypto-node['\"]"
```
Verify each finding against the current code and only fix it if needed. In `@package.json` at line 17, The package.json currently declares a runtime dependency using a local file reference "@hathor/ct-crypto-node": "file:../hathor-core/hathor-ct-crypto/npm"; replace that file: reference with a published semver (or an internal-registry/tarball URL) suitable for releases, update any lockfile accordingly, and verify no other dependencies use "file:" by searching for the string and confirming imports of "@hathor/ct-crypto-node" (to ensure it's a runtime dependency) before publishing.__tests__/wallet/walletServiceStorageProxy.test.ts (1)
18-21: ⚠️ Potential issue | 🟡 Minor

Add one positive `isShieldedOutputEntry` path test.

Line 20 locks the helper to `false` for the whole suite, so shielded-output conversion behavior can regress without detection. Please add a companion case that flips this mock to `true` and asserts the proxied tx output shape.
Verify each finding against the current code and only fix it if needed. In `@__tests__/wallet/walletServiceStorageProxy.test.ts` around lines 18 - 21, The suite globally mocks isShieldedOutputEntry to always return false, so add a new positive test in __tests__/wallet/walletServiceStorageProxy.test.ts that flips the mock to return true for that case (e.g., using (isShieldedOutputEntry as jest.Mock).mockReturnValueOnce(true) or jest.requireMock(...).isShieldedOutputEntry.mockReturnValueOnce(true)), call the same proxy logic under test (the wallet storage proxy invocation used in other tests) and assert the proxied transaction outputs match the shielded-output shape expected (verify fields/structure that differ when isShieldedOutputEntry returns true).src/utils/wallet.ts (2)
710-729:⚠️ Potential issue | 🟠 MajorSame multisig concern applies here.
generateAccessDataFromSeedunconditionally derives shielded keys for all wallet types. Apply the samewalletType === WalletType.P2PKHguard to avoid storing single-sig shielded keys on multisig wallets.Proposed fix
```diff
-  // Derive shielded scan (account 1') and spend (account 2') keys.
-  // Separate from legacy (account 0') so scan key only grants view access.
-  const scanAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
-  const scanXpriv = scanAcctXpriv.deriveNonCompliantChild(0);
-  const spendAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
-  const spendXpriv = spendAcctXpriv.deriveNonCompliantChild(0);
+  // Derive shielded scan (account 1') and spend (account 2') keys only for P2PKH wallets.
+  let scanXpriv: HDPrivateKey | undefined;
+  let spendXpriv: HDPrivateKey | undefined;
+  if (walletType === WalletType.P2PKH) {
+    const scanAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
+    scanXpriv = scanAcctXpriv.deriveNonCompliantChild(0);
+    const spendAcctXpriv = rootXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
+    spendXpriv = spendAcctXpriv.deriveNonCompliantChild(0);
+  }
   return {
     walletType,
     multisigData,
     xpubkey: xpriv.xpubkey,
     mainKey: encryptedMainKey,
     acctPathKey: encryptedAcctPathKey,
     authKey: encryptedAuthPathKey,
     words: encryptedWords,
     walletFlags: 0,
-    scanXpubkey: scanXpriv.xpubkey,
-    scanMainKey: encryptData(scanXpriv.xprivkey, pin),
-    spendXpubkey: spendXpriv.xpubkey,
-    spendMainKey: encryptData(spendXpriv.xprivkey, pin),
+    ...(scanXpriv && {
+      scanXpubkey: scanXpriv.xpubkey,
+      scanMainKey: encryptData(scanXpriv.xprivkey, pin),
+    }),
+    ...(spendXpriv && {
+      spendXpubkey: spendXpriv.xpubkey,
+      spendMainKey: encryptData(spendXpriv.xprivkey, pin),
+    }),
   };
```
Verify each finding against the current code and only fix it if needed. In `@src/utils/wallet.ts` around lines 710 - 729, The code in generateAccessDataFromSeed always derives and stores shielded keys (scanXpriv, spendXpriv, scanMainKey, spendMainKey) regardless of wallet type; change it to only derive/encrypt/populate scanXpubkey/scanMainKey and spendXpubkey/spendMainKey when walletType === WalletType.P2PKH (i.e., single-sig). Wrap the existing derivation and the return fields scanXpubkey, scanMainKey, spendXpubkey, spendMainKey behind that guard and ensure for non-P2PKH (multisig) those properties are omitted or set to null/undefined consistently so multisig wallets do not store single-sig shielded keys.
624-637: ⚠️ Potential issue | 🟠 Major

Do not derive single-sig shielded keys for multisig wallets.

The condition `if (argXpriv.depth === 0)` derives shielded keys regardless of wallet type. For `WalletType.MULTISIG`, this stores single-sig shielded keys (derived from `m/44'/280'/1'/0` and `m/44'/280'/2'/0`) that one signer controls alone, which is inconsistent with the wallet's multisig signing model.

Proposed fix
```diff
-  if (argXpriv.depth === 0) {
+  if (argXpriv.depth === 0 && walletType === WalletType.P2PKH) {
     const scanAcctXpriv = argXpriv.deriveNonCompliantChild(SHIELDED_SCAN_ACCT_PATH);
     const scanXpriv = scanAcctXpriv.deriveNonCompliantChild(0);
     accessData.scanXpubkey = scanXpriv.xpubkey;
     accessData.scanMainKey = encryptData(scanXpriv.xprivkey, pin);
     const spendAcctXpriv = argXpriv.deriveNonCompliantChild(SHIELDED_SPEND_ACCT_PATH);
     const spendXpriv2 = spendAcctXpriv.deriveNonCompliantChild(0);
     accessData.spendXpubkey = spendXpriv2.xpubkey;
     accessData.spendMainKey = encryptData(spendXpriv2.xprivkey, pin);
   }
```
Verify each finding against the current code and only fix it if needed. In `@src/utils/wallet.ts` around lines 624 - 637, The code currently derives and stores single-sig shielded keys from argXpriv (using deriveNonCompliantChild with SHIELDED_SCAN_ACCT_PATH and SHIELDED_SPEND_ACCT_PATH, then encryptData) regardless of wallet type; change the condition to skip this entire derivation block when the wallet is a multisig (check the wallet type variable, e.g., wallet.type or walletType !== WalletType.MULTISIG) so you do not set accessData.scanXpubkey, accessData.scanMainKey, accessData.spendXpubkey, or accessData.spendMainKey for WalletType.MULTISIG; keep the existing behavior for non-multisig wallets and leave all deriveNonCompliantChild/encryptData calls unchanged inside the guarded block.

src/shielded/processing.ts (1)
130-145: ⚠️ Potential issue | 🟡 Minor

Missing runtime validation for `asset_commitment` on FullShielded outputs.

The code at line 133 uses a non-null assertion (`shieldedOutput.asset_commitment!`) but relies only on the mode check. Since `asset_commitment` is optional in the type definition, a malformed fullnode response with `mode === FULLY_SHIELDED` but missing `asset_commitment` would cause a runtime error when calling `Buffer.from(undefined, 'hex')`.

Suggested fix
```diff
   if (isFullShielded) {
     // FullShielded: rewind recovers token UID and asset blinding factor
-    // asset_commitment is guaranteed for FullShielded outputs (protocol invariant)
-    const assetCommitment = Buffer.from(shieldedOutput.asset_commitment!, 'hex');
+    // Validate asset_commitment exists (required for FullShielded)
+    if (!shieldedOutput.asset_commitment) {
+      storage.logger.warn(
+        `FullShielded output missing asset_commitment for tx ${tx.tx_id} ` +
+          `output ${transparentCount + idx} — skipping`
+      );
+      continue;
+    }
+    const assetCommitment = Buffer.from(shieldedOutput.asset_commitment, 'hex');
     const result = await cryptoProvider.rewindFullShieldedOutput(
```
Verify each finding against the current code and only fix it if needed. In `@src/shielded/processing.ts` around lines 130 - 145, Check and validate that shieldedOutput.asset_commitment is present before using the non-null assertion in the isFullShielded branch: in the block where you call cryptoProvider.rewindFullShieldedOutput (inside the isFullShielded condition), guard against a missing or falsy shieldedOutput.asset_commitment and throw or return a clear error (or skip processing) if it's undefined/invalid, rather than calling Buffer.from(shieldedOutput.asset_commitment!, 'hex'); this ensures recoveredAbf/recoveredTokenUid logic only runs when asset_commitment is a valid hex string.

src/utils/transaction.ts (1)
708-738: ⚠️ Potential issue | 🟠 Major

Validate FULLY_SHIELDED-specific fields before building the model.

The validation at lines 710-714 only checks common crypto fields. A `FULLY_SHIELDED` entry with missing `assetCommitment` or `surjectionProof` will still construct a `ShieldedOutput` and fail later during serialization or processing.

🧪 Suggested fix
```diff
     const shieldedModels = txData.shieldedOutputs.map(so => {
       if (!so.commitment || !so.rangeProof || !so.script || !so.ephemeralPubkey) {
         throw new Error(
           'Shielded output missing required crypto fields (commitment, rangeProof, script, ephemeralPubkey)'
         );
       }
+      if (
+        so.mode === ShieldedOutputMode.FULLY_SHIELDED &&
+        (!so.assetCommitment || !so.surjectionProof)
+      ) {
+        throw new Error(
+          'Fully shielded output missing required crypto fields (assetCommitment, surjectionProof)'
+        );
+      }
       const tokenData = this.getTokenDataFromOutput(
```
Verify each finding against the current code and only fix it if needed. In `@src/utils/transaction.ts` around lines 708 - 738, The shielded output mapping in the transaction builder currently only validates common crypto fields before constructing a ShieldedOutput; update the validation inside the txData.shieldedOutputs.map callback (where ShieldedOutput is constructed) to also require assetCommitment and surjectionProof when so.mode indicates FULLY_SHIELDED (or the equivalent constant), throwing a clear Error if those fields are missing, so the getTokenDataFromOutput/ShieldedOutput construction won't proceed with incomplete data.

__tests__/integration/shielded_transactions.test.ts (2)
33-36: ⚠️ Potential issue | 🟠 Major

Restore the shared timeout after this suite.

This mutates `src/constants` at module scope and never restores it, so later integration files in the same worker inherit the 30s Axios timeout and become order-dependent.

🛠️ Suggested fix
```diff
 import * as constants from '../../src/constants';
 
-// Increase Axios timeout for test environment — the fullnode is under load from continuous mining.
-// TIMEOUT is declared as `const` so we must cast to override it in tests.
-// eslint-disable-next-line @typescript-eslint/no-explicit-any
-(constants as any).TIMEOUT = 30000;
+const originalTimeout = constants.TIMEOUT;
 
 describe('shielded transactions', () => {
+  beforeAll(() => {
+    // Increase Axios timeout for this suite only — the fullnode is under load from continuous mining.
+    // eslint-disable-next-line @typescript-eslint/no-explicit-any
+    (constants as any).TIMEOUT = 30000;
+  });
+
+  afterAll(() => {
+    // eslint-disable-next-line @typescript-eslint/no-explicit-any
+    (constants as any).TIMEOUT = originalTimeout;
+  });
+
   jest.setTimeout(300_000);
```
Verify each finding against the current code and only fix it if needed. In `@__tests__/integration/shielded_transactions.test.ts` around lines 33 - 36, You mutate the module-scoped constant TIMEOUT on the imported constants object and never restore it, causing test order dependence; update the test to save the original value of constants.TIMEOUT before setting (e.g., const originalTimeout = (constants as any).TIMEOUT), set (constants as any).TIMEOUT = 30000 for this suite, and restore it in an afterAll/afterEach hook (or a finally block) by reassigning (constants as any).TIMEOUT = originalTimeout so subsequent tests see the original timeout.
608-650: ⚠️ Potential issue | 🟡 Minor

Mark custom token shielded tests as skipped until the balance equation is fixed.

The phantom output mechanism creates transparent outputs for UTXO selection but removes them before finalizing the transaction. This causes custom token inputs to exceed transparent outputs, triggering a spurious "melt" error when the fullnode validates the homomorphic balance equation. The shielded output amounts must be integrated into the token balance calculation. Until this is resolved, mark both tests at lines 608-650 and 653-684 with `it.skip()` and the existing TODO to prevent CI failures.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__tests__/integration/shielded_transactions.test.ts` around lines 608 - 650, Mark the two failing tests that exercise custom token shielded outputs as skipped: change the test declarations for "should send shielded outputs with a custom token (AmountShielded)" and the companion "should send shielded outputs with a custom token (ValueShielded)" to use it.skip(...) instead of it(...), preserving the existing TODO comment about phantom outputs and the balance-equation bug so CI won't run them until the token balance calculation (including shielded output amounts) is fixed.

src/types.ts (1)
761-765: ⚠️ Potential issue | 🟠 Major

The shielded xpriv accessor signatures still hide the legacy-wallet absence case.

The docstring says these methods return `undefined` when the wallet predates shielded support, but `getScanXPrivKey()` and `getSpendXPrivKey()` still promise `string`. That makes the interface lie to callers and defeats the compatibility guard you just documented.

💡 Suggested signature fix
```diff
   // Shielded key methods (return undefined if wallet was created before shielded feature)
-  getScanXPrivKey(pinCode: string): Promise<string>;
-  getSpendXPrivKey(pinCode: string): Promise<string>;
+  getScanXPrivKey(pinCode: string): Promise<string | undefined>;
+  getSpendXPrivKey(pinCode: string): Promise<string | undefined>;
   getScanXPubKey(): Promise<string | undefined>;
   getSpendXPubKey(): Promise<string | undefined>;
```
Verify each finding against the current code and only fix it if needed. In `@src/types.ts` around lines 761 - 765, Update the shielded xpriv accessor signatures so they match the documented behavior: change getScanXPrivKey(pinCode: string) and getSpendXPrivKey(pinCode: string) to return Promise<string | undefined> (like getScanXPubKey/getSpendXPubKey) so callers can handle the legacy-wallet (undefined) case.

src/new/wallet.ts (1)
861-866: ⚠️ Potential issue | 🟠 Major

`getAddressPathForIndex()` is still ambiguous for shielded spend addresses.

Once both transparent and shielded-spend chains exist at the same index, `index` alone is no longer enough to reconstruct the derivation path. This method defaults to the legacy path unless callers remember to pass `opts`, but existing helpers in this file still call it with only the index, so shielded UTXOs/input metadata can still expose `m/.../0/...` instead of `m/.../2/...`.

Please make path resolution address-aware (e.g. derive from the address type or address info) and update the remaining call sites to use that source of truth.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/new/wallet.ts` around lines 861 - 866, getAddressPathForIndex currently treats index alone as ambiguous for shielded spend addresses; change its signature to accept an address-aware discriminator (e.g. add an addressType or address string parameter to IAddressChainOptions) and use that to choose between the legacy path and the shielded spend path (referencing getAddressPathForIndex, IAddressChainOptions, and SHIELDED_SPEND_ACCT_PATH). Update all helper call sites in this file that call getAddressPathForIndex(index) to pass the derived addressType/address info instead of relying on the default, and ensure the function returns the correct m/.../2/... shielded spend path when addressType indicates shielded spend.
🧹 Nitpick comments (1)
__tests__/integration/shielded_transactions.test.ts (1)
1191-1230: Test name doesn't match behavior.

The test claims to verify "read-only (xpub-only) wallet" behavior, but `walletB` is created with a seed and `DEFAULT_PIN_CODE`, making it a full wallet capable of decryption. The comment at line 1226-1228 acknowledges this. Either:
- Rename the test to reflect what it actually tests (e.g., "should process shielded outputs on receiving wallet"), or
- Actually create an xpub-only wallet (no seed/pinCode) to test graceful degradation when decryption is impossible.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@__tests__/integration/shielded_transactions.test.ts` around lines 1191 - 1230, The test currently creates a full wallet (walletB) with a seed so it can decrypt shielded outputs; update the test to actually create an xpub-only/read-only wallet or rename the test. If you choose to make walletB read-only, call generateWalletHelper in xpub-only/read-only mode (use the precalculated wallet data from precalculationHelpers.test.getPrecalculatedWallet() and pass only preCalculatedAddresses/xpub or the helper's xpubOnly flag instead of seed/pinCode) so walletB cannot decrypt, then adjust the expected unlocked balance from 50n to 0n when asserting via walletB.getBalance(NATIVE_TOKEN_UID). Alternatively, rename the test to reflect that it currently verifies processing on a full receiving wallet (e.g., "should process shielded outputs on receiving wallet") and keep the existing walletB creation and expectation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@__tests__/utils/transaction.test.ts`:
- Around line 743-747: The test is pre-deriving child 0 which masks derivation
bugs; instead mock storage.getSpendXPrivKey to return the root spend xpriv (use
spendXpriv.xprivkey) rather than spendXpriv.deriveNonCompliantChild(0).xprivkey,
and change the test's bip32AddressIndex from 0 to a non-zero value (e.g., 5) so
the code path actually performs derivation; update any other similar
mocks/assertions in the block (lines referencing storage.getSpendXPrivKey,
spendXpriv.deriveNonCompliantChild, and bip32AddressIndex) to follow the same
pattern.
In `@src/new/wallet.ts`:
- Around line 3054-3058: fullTx may contain raw shielded outputs (from
getTxById) which are left un-hydrated by the current map and then cast to
IHistoryTx and passed to getTxBalance; getTxBalance expects normalized/decrypted
history-shaped outputs. Fix by normalizing shielded outputs before balance
calculation: when transactionUtils.isShieldedOutputEntry(output) is true,
convert that output into the wallet's internal history shape (the same
normalization/decryption used elsewhere) instead of leaving it raw, or
alternatively skip casting fullTx.tx to IHistoryTx and compute the balance for
shielded entries via a separate shielded-aware path; ensure references to
fullTx, transactionUtils.isShieldedOutputEntry, hydrateWithTokenUid, getTxById
and getTxBalance are updated so getTxBalance only receives normalized
IHistoryTx-shaped data.
In `@src/shielded/provider.ts`:
- Around line 29-47: The createAmountShieldedOutput implementation is missing
TypeScript parameter type annotations required by IShieldedCryptoProvider;
update the signature of createAmountShieldedOutput to explicitly annotate
parameters (value: bigint, recipientPubkey: Buffer, tokenUid: Buffer,
valueBlindingFactor: Buffer) while keeping the existing ICreatedShieldedOutput
return type so the method matches the interface and passes BigInt safely to the
native addon.
In `@src/types.ts`:
- Around line 262-273: The IHistoryInput interface currently requires value,
token_data, script, decoded, and token even though the comment says they may be
absent for shielded inputs; update IHistoryInput so those five properties are
optional (value?, token_data?, script?, decoded?, token?) while keeping tx_id
and index required, and then run/typecheck to find and guard any sites that
assume those fields are always present (e.g., usages of IHistoryInput.value or
.decoded).
---
Outside diff comments:
In `@__tests__/integration/helpers/wallet.helper.ts`:
- Around line 327-338: The timeout happens because processHistory is only called
after waiting for processingStatus === FINISHED, so when the timeout path is
taken line invoking hWallet.storage.processHistory(...) never runs; change the
flow so that after marking inputs spent you always invoke and await
hWallet.storage.processHistory(hWallet.pinCode || undefined) (i.e., call
processHistory immediately after updateInputsSpentBy(hWallet, storageTx)
whenever storageTx.is_voided === false) rather than waiting for processingStatus
to reach FINISHED, ensuring history is processed in the timeout path as well.
---
Duplicate comments:
In `@__tests__/integration/hathorwallet_facade.test.ts`:
- Line 70: The test currently asserts a mined transaction nonce with
expect(tx.nonce).toBeGreaterThan(0), which is flaky because nonce may be 0;
update the assertion for tx.nonce in
__tests__/integration/hathorwallet_facade.test.ts to allow zero (e.g., assert it
is a number and >= 0 or use toBeGreaterThanOrEqual(0)) so the test only fails
for negative or non-numeric nonces; adjust any surrounding comments to reflect
the new expectation.
In `@__tests__/integration/shielded_transactions.test.ts`:
- Around line 33-36: You mutate the module-scoped constant TIMEOUT on the
imported constants object and never restore it, causing test order dependence;
update the test to save the original value of constants.TIMEOUT before setting
(e.g., const originalTimeout = (constants as any).TIMEOUT), set (constants as
any).TIMEOUT = 30000 for this suite, and restore it in an afterAll/afterEach
hook (or a finally block) by reassigning (constants as any).TIMEOUT =
originalTimeout so subsequent tests see the original timeout.
- Around line 608-650: Mark the two failing tests that exercise custom token
shielded outputs as skipped: change the test declarations for "should send
shielded outputs with a custom token (AmountShielded)" and the companion "should
send shielded outputs with a custom token (ValueShielded)" to use it.skip(...)
instead of it(...), preserving the existing TODO comment about phantom outputs
and the balance-equation bug so CI won't run them until the token balance
calculation (including shielded output amounts) is fixed.
In `@__tests__/wallet/walletServiceStorageProxy.test.ts`:
- Around line 18-21: The suite globally mocks isShieldedOutputEntry to always
return false, so add a new positive test in
__tests__/wallet/walletServiceStorageProxy.test.ts that flips the mock to return
true for that case (e.g., using (isShieldedOutputEntry as
jest.Mock).mockReturnValueOnce(true) or
jest.requireMock(...).isShieldedOutputEntry.mockReturnValueOnce(true)), call the
same proxy logic under test (the wallet storage proxy invocation used in other
tests) and assert the proxied transaction outputs match the shielded-output
shape expected (verify fields/structure that differ when isShieldedOutputEntry
returns true).
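As a standalone illustration of the mock-flipping pattern this prompt describes (the real suite uses Jest; the helper, output shape, and proxy conversion below are simplified stand-ins, not the library's actual API):

```typescript
// Simplified stand-ins for the suite's mocked helper and proxy conversion.
// In the real suite the flip would be done with
// (isShieldedOutputEntry as jest.Mock).mockReturnValueOnce(true).
type TxOutput = { value?: bigint; commitment?: string };

let shieldedFlag = false; // what the mock currently returns
const isShieldedOutputEntry = (_o: TxOutput): boolean => shieldedFlag;

// Hypothetical proxy conversion: shielded entries keep the commitment and
// drop the (hidden) value; transparent entries keep the value.
function proxyOutput(o: TxOutput): TxOutput {
  return isShieldedOutputEntry(o) ? { commitment: o.commitment } : { value: o.value };
}

shieldedFlag = false;
const transparent = proxyOutput({ value: 10n, commitment: 'ab' });
shieldedFlag = true; // the positive-path case the review asks for
const shielded = proxyOutput({ value: 10n, commitment: 'ab' });
```

The point is that both branches of the conversion get exercised; with the mock pinned to `false`, the shielded branch is dead code in the suite.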
In `@package.json`:
- Line 17: The package.json currently declares a runtime dependency using a
local file reference "@hathor/ct-crypto-node":
"file:../hathor-core/hathor-ct-crypto/npm"; replace that file: reference with a
published semver (or an internal-registry/tarball URL) suitable for releases,
update any lockfile accordingly, and verify no other dependencies use "file:" by
searching for the string and confirming imports of "@hathor/ct-crypto-node" (to
ensure it's a runtime dependency) before publishing.
In `@src/new/wallet.ts`:
- Around line 861-866: getAddressPathForIndex currently treats index alone as
ambiguous for shielded spend addresses; change its signature to accept an
address-aware discriminator (e.g. add an addressType or address string parameter
to IAddressChainOptions) and use that to choose between the legacy path and the
shielded spend path (referencing getAddressPathForIndex, IAddressChainOptions,
and SHIELDED_SPEND_ACCT_PATH). Update all helper call sites in this file that
call getAddressPathForIndex(index) to pass the derived addressType/address info
instead of relying on the default, and ensure the function returns the correct
m/.../2/... shielded spend path when addressType indicates shielded spend.
In `@src/shielded/processing.ts`:
- Around line 130-145: Check and validate that shieldedOutput.asset_commitment
is present before using the non-null assertion in the isFullShielded branch: in
the block where you call cryptoProvider.rewindFullShieldedOutput (inside the
isFullShielded condition), guard against a missing or falsy
shieldedOutput.asset_commitment and throw or return a clear error (or skip
processing) if it's undefined/invalid, rather than calling
Buffer.from(shieldedOutput.asset_commitment!, 'hex'); this ensures
recoveredAbf/recoveredTokenUid logic only runs when asset_commitment is a valid
hex string.
In `@src/types.ts`:
- Around line 761-765: Update the shielded xpriv accessor signatures so they
match the documented behavior: change getScanXPrivKey(pinCode: string) and
getSpendXPrivKey(pinCode: string) to return Promise<string | undefined> (like
getScanXPubKey/getSpendXPubKey) so callers can handle the legacy-wallet
(undefined) case.
In `@src/utils/bigint.ts`:
- Around line 29-48: bigIntReviver currently calls BigInt(context.source)
without ensuring context or context.source exists, which can throw before the
catch; change bigIntReviver to first determine a safe input string for BigInt by
checking if context and context.source are present (e.g., typeof context?.source
=== 'string') and use that, otherwise fall back to String(value) or another safe
representation, then call BigInt on that input; also update the catch branch to
compare error messages against generic BigInt errors without assuming
context.source is available (remove context?.source usage in error message
comparisons) so the reviver is safe when the third argument is missing.
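A minimal sketch of the guarded reviver this prompt describes — the function name and `context` shape follow the prompt, but the surrounding policy (which values become bigint) is an assumption, not taken from the library:

```typescript
// Safe BigInt reviver sketch: prefer the lossless JSON source text when the
// engine supplies the reviver's third `context` argument, otherwise fall
// back to String(value) instead of dereferencing a missing context.
function bigIntReviver(
  _key: string,
  value: unknown,
  context?: { source?: string }
): unknown {
  if (typeof value !== 'number' || !Number.isInteger(value)) {
    return value;
  }
  const input = typeof context?.source === 'string' ? context.source : String(value);
  try {
    return BigInt(input);
  } catch {
    // e.g. "1e+21" from String(value) on very large numbers — keep as-is.
    return value;
  }
}
```

Note that `JSON.parse` only passes the `context` argument on recent engines, so the `String(value)` fallback is what keeps the reviver safe everywhere else.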
In `@src/utils/transaction.ts`:
- Around line 708-738: The shielded output mapping in the transaction builder
currently only validates common crypto fields before constructing a
ShieldedOutput; update the validation inside the txData.shieldedOutputs.map
callback (where ShieldedOutput is constructed) to also require assetCommitment
and surjectionProof when so.mode indicates FULLY_SHIELDED (or the equivalent
constant), throwing a clear Error if those fields are missing, so the
getTokenDataFromOutput/ShieldedOutput construction won't proceed with incomplete
data.
In `@src/utils/wallet.ts`:
- Around line 710-729: The code in generateAccessDataFromSeed always derives and
stores shielded keys (scanXpriv, spendXpriv, scanMainKey, spendMainKey)
regardless of wallet type; change it to only derive/encrypt/populate
scanXpubkey/scanMainKey and spendXpubkey/spendMainKey when walletType ===
WalletType.P2PKH (i.e., single-sig). Wrap the existing derivation and the return
fields scanXpubkey, scanMainKey, spendXpubkey, spendMainKey behind that guard
and ensure for non-P2PKH (multisig) those properties are omitted or set to
null/undefined consistently so multisig wallets do not store single-sig shielded
keys.
- Around line 624-637: The code currently derives and stores single-sig shielded
keys from argXpriv (using deriveNonCompliantChild with SHIELDED_SCAN_ACCT_PATH
and SHIELDED_SPEND_ACCT_PATH, then encryptData) regardless of wallet type;
change the condition to skip this entire derivation block when the wallet is a
multisig (check the wallet type variable, e.g., wallet.type or walletType !==
WalletType.MULTISIG) so you do not set accessData.scanXpubkey,
accessData.scanMainKey, accessData.spendXpubkey, or accessData.spendMainKey for
WalletType.MULTISIG; keep the existing behavior for non-multisig wallets and
leave all deriveNonCompliantChild/encryptData calls unchanged inside the guarded
block.
---
Nitpick comments:
In `@__tests__/integration/shielded_transactions.test.ts`:
- Around line 1191-1230: The test currently creates a full wallet (walletB) with
a seed so it can decrypt shielded outputs; update the test to actually create an
xpub-only/read-only wallet or rename the test. If you choose to make walletB
read-only, call generateWalletHelper in xpub-only/read-only mode (use the
precalculated wallet data from
precalculationHelpers.test.getPrecalculatedWallet() and pass only
preCalculatedAddresses/xpub or the helper's xpubOnly flag instead of
seed/pinCode) so walletB cannot decrypt, then adjust the expected unlocked
balance from 50n to 0n when asserting via walletB.getBalance(NATIVE_TOKEN_UID).
Alternatively, rename the test to reflect that it currently verifies processing
on a full receiving wallet (e.g., "should process shielded outputs on receiving
wallet") and keep the existing walletB creation and expectation.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: cc88ebf5-ee9f-4341-a5be-eae6f9b29a42
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (49)
- __tests__/headers/shielded_outputs.test.ts
- __tests__/integration/configuration/docker-compose.yml
- __tests__/integration/configuration/privnet.yml
- __tests__/integration/hathorwallet_facade.test.ts
- __tests__/integration/helpers/wallet.helper.ts
- __tests__/integration/shielded_transactions.test.ts
- __tests__/models/address.test.ts
- __tests__/models/network.test.ts
- __tests__/models/shielded_output.test.ts
- __tests__/models/transaction.test.ts
- __tests__/shielded/creation.test.ts
- __tests__/shielded/processing.test.ts
- __tests__/storage/memory_store.test.ts
- __tests__/storage/storage.test.ts
- __tests__/utils/address.test.ts
- __tests__/utils/transaction.test.ts
- __tests__/wallet/walletServiceStorageProxy.test.ts
- package.json
- setupTests-integration.js
- src/constants.ts
- src/headers/parser.ts
- src/headers/shielded_outputs.ts
- src/headers/types.ts
- src/lib.ts
- src/models/address.ts
- src/models/network.ts
- src/models/shielded_output.ts
- src/models/transaction.ts
- src/new/sendTransaction.ts
- src/new/types.ts
- src/new/wallet.ts
- src/schemas.ts
- src/shielded/creation.ts
- src/shielded/index.ts
- src/shielded/processing.ts
- src/shielded/provider.ts
- src/shielded/types.ts
- src/storage/memory_store.ts
- src/storage/storage.ts
- src/sync/stream.ts
- src/types.ts
- src/utils/address.ts
- src/utils/bigint.ts
- src/utils/shieldedAddress.ts
- src/utils/storage.ts
- src/utils/transaction.ts
- src/utils/wallet.ts
- src/wallet/types.ts
- src/wallet/walletServiceStorageProxy.ts
💤 Files with no reviewable changes (1)
- setupTests-integration.js
```ts
jest.spyOn(storage, 'getMainXPrivKey').mockResolvedValue(legacyXpriv.xprivkey);
jest
  .spyOn(storage, 'getSpendXPrivKey')
  .mockResolvedValue(spendXpriv.deriveNonCompliantChild(0).xprivkey);
```
Strengthen this test to avoid false positives on key derivation.
Line 746 pre-derives child 0 and the test also uses `bip32AddressIndex: 0`, which can pass even if the production path derivation is wrong. Prefer mocking the root spend xpriv and asserting with a non-zero index.
🔧 Suggested test hardening
```diff
-    jest
-      .spyOn(storage, 'getSpendXPrivKey')
-      .mockResolvedValue(spendXpriv.deriveNonCompliantChild(0).xprivkey);
+    jest.spyOn(storage, 'getSpendXPrivKey').mockResolvedValue(spendXpriv.xprivkey);
@@
-      bip32AddressIndex: 0,
+      bip32AddressIndex: 7,
@@
-    expect(sigData.inputSignatures[0].addressIndex).toBe(0);
+    expect(sigData.inputSignatures[0].addressIndex).toBe(7);
```

Also applies to: 753-777
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@__tests__/utils/transaction.test.ts` around lines 743 - 747, The test is
pre-deriving child 0 which masks derivation bugs; instead mock
storage.getSpendXPrivKey to return the root spend xpriv (use
spendXpriv.xprivkey) rather than spendXpriv.deriveNonCompliantChild(0).xprivkey,
and change the test's bip32AddressIndex from 0 to a non-zero value (e.g., 5) so
the code path actually performs derivation; update any other similar
mocks/assertions in the block (lines referencing storage.getSpendXPrivKey,
spendXpriv.deriveNonCompliantChild, and bip32AddressIndex) to follow the same
pattern.
Replace file: symlink with the published npm package. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Codecov Report

❌ Patch coverage is 68.03%.

❌ Your patch check has failed because the patch coverage (68.03%) is below the target coverage (70.00%). You can increase the patch coverage or adjust the target coverage.
Additional details and impacted files

```
@@             Coverage Diff              @@
##           master    #1059        +/-   ##
============================================
- Coverage   87.93%   66.46%    -21.47%
============================================
  Files         114      120         +6
  Lines        8910     9489       +579
  Branches     2030     2193       +163
============================================
- Hits         7835     6307      -1528
- Misses       1047     3142      +2095
- Partials       28       40        +12
```

☔ View full report in Codecov by Sentry.
- Make IHistoryInput fields optional for shielded inputs that lack
value/token/decoded (spending hidden commitments)
- Add shielded-input guards in balance calculation loops across
storage.ts, transaction.ts, wallet.ts, and memory_store.ts
- Normalize shielded outputs in getTxById before getTxBalance call
- Add explicit type annotations to all provider.ts method parameters
- Use non-zero bip32AddressIndex in spend-key derivation test to
avoid false positives
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
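The guard pattern this commit describes can be sketched in isolation — the interface is trimmed to the relevant fields and the summing helper is hypothetical, not the wallet-lib's actual code:

```typescript
// With IHistoryInput's value/token now optional, balance loops must skip
// shielded inputs (which spend hidden commitments) rather than mixing
// `undefined` into a bigint sum.
interface IHistoryInputSketch {
  tx_id: string;
  index: number;
  value?: bigint; // absent for shielded inputs
  token?: string; // absent for shielded inputs
}

function sumTransparentInputs(inputs: IHistoryInputSketch[], token: string): bigint {
  let total = 0n;
  for (const input of inputs) {
    // Shielded-input guard: no value/token means the amount is hidden.
    if (input.value === undefined || input.token === undefined) continue;
    if (input.token !== token) continue;
    total += input.value;
  }
  return total;
}
```

Without the guard, `total += input.value` on a shielded entry would throw (`Cannot mix BigInt and other types`), which is exactly what the optional-field change exposes at compile time.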
@coderabbitai the last 2 commits address all your 4 comments. Please resolve everything and run a full review again
✅ Actions performed

Full review triggered.
Shielded transaction tests create many wallets and perform crypto operations that take significantly longer than the previous 50-minute limit allowed. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
proof domain, idempotent decryption, and address iteration
- Fix surjection proof domain for FullShielded inputs: use asset_commitment
(blinded generator) instead of ZERO_TWEAK, matching fullnode verification
- Fix idempotent shielded decryption: skip re-decryption in processNewTx
when outputs were already decoded (prevents duplicate append on reprocess)
- Fix processSingleTx bounds check to account for shielded outputs
- Fix saveAddress: exclude shielded-spend from legacy cursor tracking (#19)
- Fix addressIter/addressCount: filter shielded addresses from legacy iteration
- Fix pinCode || to ?? for nullish coalescing (#26)
- Fix getCurrentAddress error message with chain type and index (#28)
- Add FullShielded-to-FullShielded chain integration test
- Increase CI integration test timeout to 120 minutes
- Document FullShielded fee constraint (TODO_FIX_30)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
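The idempotent-decryption fix above can be illustrated with a toy sketch. Everything here is an assumption for illustration: the `ShieldedOutput` shape, the `decoded` field name, and `processShieldedOutputs` stand in for the real processNewTx logic.

```typescript
// Hypothetical output shape: once decrypted, the plaintext lands in `decoded`.
interface ShieldedOutput {
  commitment: string;
  decoded?: { value: bigint; tokenUid: string };
}

// On re-delivery of the same tx, outputs that already carry decoded data are
// skipped, so reprocessing never appends a duplicate entry. Returns how many
// outputs were newly decoded on this pass.
function processShieldedOutputs(
  outputs: ShieldedOutput[],
  decrypt: (commitment: string) => { value: bigint; tokenUid: string } | null
): number {
  let newlyDecoded = 0;
  for (const output of outputs) {
    if (output.decoded !== undefined) continue; // already processed: idempotent
    const plain = decrypt(output.commitment);
    if (plain !== null) {
      output.decoded = plain;
      newlyDecoded += 1;
    }
  }
  return newlyDecoded;
}
```

Running the same outputs through a second pass decodes nothing new, which is the invariant the fix enforces.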
Shielded-output tests use many wallets per file; the 300-wallet pool ran out. Also bump TX_TIMEOUT_DEFAULT from 5s to 30s — shielded tx mining can take noticeably longer on CI. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
…uite Rename shielded_transactions.test.ts to shielded_outputs/core.test.ts and split the scenarios into focused files: address derivation, concurrency, cross-wallet, crypto failures, display invariants, edge cases, mixed modes, token creation (K.1–K.9), unshielding (U.1–U.3), UTXO selector, wallet restart, and ws message ordering. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Transaction.calculateWeight hardcoded TX_WEIGHT_CONSTANTS, but the shielded testnet reports much lower min_tx_weight / min_tx_weight_k, which significantly reduces mining work. Accept an optional override so callers with access to storage.version.* can pass the network values, and add transactionUtils.getWeightConstantsFromStorage as the common entry point. Update every prepareToSend call site. Guard against division-by-zero when txMinWeightK is 0 — the 4/(1 + k/amount) term collapses to 4. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
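A hedged sketch of the overridable weight calculation described in this commit. The formula shape and constant names (`txMinWeight`, `txWeightCoefficient`, `txMinWeightK`) follow the commit message, not the verified library source; the default values are placeholders for whatever storage.version.* reports.

```typescript
// Constants the caller can override with network-reported values
// (storage.version.min_tx_weight / min_tx_weight_k etc.). Names assumed.
interface WeightConstants {
  txMinWeight: number;
  txWeightCoefficient: number;
  txMinWeightK: number;
}

// Illustrative defaults only; real values come from the fullnode.
const DEFAULT_CONSTANTS: WeightConstants = {
  txMinWeight: 14,
  txWeightCoefficient: 1.6,
  txMinWeightK: 100,
};

function calculateWeight(
  txSize: number,
  amount: number,
  constants: WeightConstants = DEFAULT_CONSTANTS
): number {
  const { txMinWeight, txWeightCoefficient, txMinWeightK } = constants;
  // Per the commit: when txMinWeightK is 0, the 4/(1 + k/amount) term
  // collapses to exactly 4, so short-circuit instead of dividing.
  const penalty = txMinWeightK === 0 ? 4 : 4 / (1 + txMinWeightK / amount);
  const weight = txWeightCoefficient * Math.log2(txSize) + penalty + 4;
  return Math.max(weight, txMinWeight);
}
```

Callers with storage access would fetch the constants once (the commit's `getWeightConstantsFromStorage` entry point) and pass them through every `prepareToSend`.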
Extend fullnodeTxApiInputSchema to a union of transparent + shielded input schemas (commitment + range_proof only), and surface shielded_inputs/shielded_outputs on the tx schema with passthrough for forward-compatibility with new fullnode fields. Add transactionUtils.isShieldedInputEntry helper and use it at all call sites that hydrate token_data or read decoded fields — the shielded wire form omits those. In computeTxBalance, recover value/token/address for wallet-owned shielded inputs from the stored UTXO so debits flow through. Handle shielded addresses in createOutputScript by deriving the spend-pubkey P2PKH, matching createOutputScriptFromAddress. In HathorWallet.onNewTx, pre-enrich shielded inputs from the previously-stored tx before addTx overwrites them, trigger a full processHistory when shielded data arrives on a second delivery, and carry over previously-decoded shielded outputs across updates so the per-tx delta stays correct. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
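The shielded-vs-transparent wire distinction this commit relies on can be sketched without the real zod schemas. The field sets below follow the commit text (shielded inputs carry only `commitment` + `range_proof`); the type names and the `hydrateTokenData` stand-in are assumptions.

```typescript
// Two wire forms for a tx input, per the commit description.
type TransparentInputWire = { tx_id: string; index: number; data?: string };
type ShieldedInputWire = { commitment: string; range_proof: string };
type TxInputWire = TransparentInputWire | ShieldedInputWire;

// In the spirit of transactionUtils.isShieldedInputEntry: the shielded wire
// form omits tx_id/index/decoded, so the commitment fields identify it.
function isShieldedInputEntry(input: TxInputWire): input is ShieldedInputWire {
  return 'commitment' in input && 'range_proof' in input;
}

// Stand-in for the call sites that hydrate token_data or read decoded fields:
// they must skip shielded entries, which have neither. Returns the count of
// inputs actually hydrated.
function hydrateTokenData(inputs: TxInputWire[]): number {
  let hydrated = 0;
  for (const input of inputs) {
    if (isShieldedInputEntry(input)) continue; // no token_data on the wire
    hydrated += 1; // real code would resolve token_data here
  }
  return hydrated;
}
```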
processHistory now iterates chronologically (oldest first) and after each tx deletes UTXOs corresponding to its transparent inputs. Without this, cleanMetadata + reprocessing resurrects spent UTXOs because spent_by isn't always set on the origin output when shielded txs arrive out of order, and the next send picks a stale UTXO producing 'input already spent' at the fullnode. processSingleTx deletes the spent shielded UTXO when input.index targets a position normalized out of outputs[]. Previously it just continued, leaving the selector offering the spent entry. processNewTx hydrates missing input.token from token_data using the tx's tokens array (with the tx_id aliasing rule for CREATE_TOKEN_TX) and enriches shielded inputs from the on-disk UTXO, falling back to the pre-save tx snapshot on re-delivery and to the origin tx's output as a last-resort transparent enrichment. Without this, balance drifts upward on every processHistory cycle because input debits are skipped. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
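The reprocessing invariant above (oldest-first iteration, then deleting the UTXOs each tx's inputs spent) can be shown with a toy in-memory map. All names here are illustrative, not the storage layer's real API.

```typescript
// Toy tx shape: enough to model outputs created and inputs spent.
interface ToyTx {
  txId: string;
  timestamp: number;
  inputs: { tx_id: string; index: number }[];
  outputs: { value: bigint }[];
}

// Iterate chronologically and, after registering each tx's outputs, delete
// the UTXOs its inputs reference. Even if spent_by was never set on the
// origin output (shielded txs arriving out of order), a spent UTXO cannot
// survive reprocessing and be offered to the next send.
function reprocessHistory(txs: ToyTx[]): Map<string, bigint> {
  const utxos = new Map<string, bigint>();
  const ordered = [...txs].sort((a, b) => a.timestamp - b.timestamp);
  for (const tx of ordered) {
    tx.outputs.forEach((o, i) => utxos.set(`${tx.txId}:${i}`, o.value));
    for (const input of tx.inputs) {
      utxos.delete(`${input.tx_id}:${input.index}`); // spent: drop it
    }
  }
  return utxos;
}
```

Feeding the txs in arrival order (newest first) still yields only the unspent output, which is exactly the resurrection bug the chronological pass fixes.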
The selector previously filtered out shielded UTXOs to avoid 'invalid surplus of HTR' at the fullnode for token-creation txs that accidentally pulled a commitment-hidden HTR input. hathor-core commit 75831f9a now skips the HTR surplus check for shielded txs (is_shielded() == has_shielded_inputs OR has_shielded_outputs), and ownership is enforced via P2PKH on the spend-derived key — so the filter is no longer needed and was blocking valid unshielding sends. Drop the shielded: false filter and revert the allowShielded plumbing it was gated on (sendTransaction no longer needs to build a tokensWithShieldedOutputs set). Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Upgrades @hathor/ct-crypto-node to the newly-published 0.4.0 (constant-size range proofs + uniffi bindings available). Keeps ct-crypto-node as an optionalDependency so mobile consumers can skip the NAPI prebuild and provide their own shielded crypto provider (via Swift/uniffi).